00:00:00.001 Started by upstream project "autotest-nightly" build number 3333
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 2727
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.082 The recommended git tool is: git
00:00:00.083 using credential 00000000-0000-0000-0000-000000000002
00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.102 Fetching changes from the remote Git repository
00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.131 Using shallow fetch with depth 1
00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.131 > git --version # timeout=10
00:00:00.157 > git --version # 'git version 2.39.2'
00:00:00.157 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:09.471 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:09.481 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:09.491 Checking out Revision 98d6b8327afc23a73b335b56c2817216b73f106d (FETCH_HEAD)
00:00:09.491 > git config core.sparsecheckout # timeout=10
00:00:09.503 > git read-tree -mu HEAD # timeout=10
00:00:09.517 > git checkout -f 98d6b8327afc23a73b335b56c2817216b73f106d # timeout=5
00:00:09.538 Commit message: "jenkins/jjb-config: Retab check_jenkins_labels.sh"
00:00:09.538 > git rev-list --no-walk 98d6b8327afc23a73b335b56c2817216b73f106d # timeout=10
00:00:09.632 [Pipeline] Start of Pipeline
00:00:09.647 [Pipeline] library
00:00:09.648 Loading library shm_lib@master
00:00:09.649 Library shm_lib@master is cached. Copying from home.
00:00:09.667 [Pipeline] node
00:00:09.677 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:09.679 [Pipeline] {
00:00:09.691 [Pipeline] catchError
00:00:09.693 [Pipeline] {
00:00:09.708 [Pipeline] wrap
00:00:09.719 [Pipeline] {
00:00:09.728 [Pipeline] stage
00:00:09.729 [Pipeline] { (Prologue)
00:00:09.907 [Pipeline] sh
00:00:10.192 + logger -p user.info -t JENKINS-CI
00:00:10.208 [Pipeline] echo
00:00:10.209 Node: WFP3
00:00:10.217 [Pipeline] sh
00:00:10.515 [Pipeline] setCustomBuildProperty
00:00:10.527 [Pipeline] echo
00:00:10.529 Cleanup processes
00:00:10.533 [Pipeline] sh
00:00:10.815 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.815 1975480 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.828 [Pipeline] sh
00:00:11.112 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.112 ++ grep -v 'sudo pgrep'
00:00:11.112 ++ awk '{print $1}'
00:00:11.112 + sudo kill -9
00:00:11.112 + true
00:00:11.127 [Pipeline] cleanWs
00:00:11.137 [WS-CLEANUP] Deleting project workspace...
00:00:11.137 [WS-CLEANUP] Deferred wipeout is used...
00:00:11.143 [WS-CLEANUP] done
00:00:11.148 [Pipeline] setCustomBuildProperty
00:00:11.164 [Pipeline] sh
00:00:11.445 + sudo git config --global --replace-all safe.directory '*'
00:00:11.513 [Pipeline] nodesByLabel
00:00:11.514 Found a total of 1 nodes with the 'sorcerer' label
00:00:11.525 [Pipeline] httpRequest
00:00:11.529 HttpMethod: GET
00:00:11.530 URL: http://10.211.11.40/jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:11.535 Sending request to url: http://10.211.11.40/jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:11.555 Response Code: HTTP/1.1 200 OK
00:00:11.556 Success: Status code 200 is in the accepted range: 200,404
00:00:11.557 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:17.136 [Pipeline] sh
00:00:17.420 + tar --no-same-owner -xf jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:17.438 [Pipeline] httpRequest
00:00:17.442 HttpMethod: GET
00:00:17.443 URL: http://10.211.11.40/spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:00:17.444 Sending request to url: http://10.211.11.40/spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:00:17.459 Response Code: HTTP/1.1 200 OK
00:00:17.460 Success: Status code 200 is in the accepted range: 200,404
00:00:17.461 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:00:59.656 [Pipeline] sh
00:00:59.938 + tar --no-same-owner -xf spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:01:02.521 [Pipeline] sh
00:01:02.804 + git -C spdk log --oneline -n5
00:01:02.804 3bec6cb23 module/bdev: Fix -Werror=maybe-uninitialized instances under raid/*
00:01:02.804 cbeecee61 nvme: use array index to get pointer for MAKE_DIGEST_WORD
00:01:02.804 f8fe0c418 test/unit/lib/nvme: initialize qpair in test_nvme_allocate_request_null()
00:01:02.804 744b9950e app/spdk_dd: dd was freezing with empty input file and count/skip flags
00:01:02.804 156969520 lib/trace : Display names for user created threads
00:01:02.816 [Pipeline] }
00:01:02.833 [Pipeline] // stage
00:01:02.842 [Pipeline] stage
00:01:02.844 [Pipeline] { (Prepare)
00:01:02.861 [Pipeline] writeFile
00:01:02.878 [Pipeline] sh
00:01:03.161 + logger -p user.info -t JENKINS-CI
00:01:03.174 [Pipeline] sh
00:01:03.459 + logger -p user.info -t JENKINS-CI
00:01:03.514 [Pipeline] sh
00:01:03.797 + cat autorun-spdk.conf
00:01:03.797 RUN_NIGHTLY=1
00:01:03.797 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.797 SPDK_TEST_NVMF=1
00:01:03.797 SPDK_TEST_NVME_CLI=1
00:01:03.797 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:03.797 SPDK_TEST_NVMF_NICS=e810
00:01:03.797 SPDK_RUN_UBSAN=1
00:01:03.804 NET_TYPE=phy
00:01:03.809 [Pipeline] readFile
00:01:03.832 [Pipeline] withEnv
00:01:03.834 [Pipeline] {
00:01:03.847 [Pipeline] sh
00:01:04.131 + set -ex
00:01:04.131 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:04.131 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:04.131 ++ RUN_NIGHTLY=1
00:01:04.131 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.131 ++ SPDK_TEST_NVMF=1
00:01:04.131 ++ SPDK_TEST_NVME_CLI=1
00:01:04.131 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.131 ++ SPDK_TEST_NVMF_NICS=e810
00:01:04.131 ++ SPDK_RUN_UBSAN=1
00:01:04.131 ++ NET_TYPE=phy
00:01:04.131 + case $SPDK_TEST_NVMF_NICS in
00:01:04.131 + DRIVERS=ice
00:01:04.131 + [[ tcp == \r\d\m\a ]]
00:01:04.131 + [[ -n ice ]]
00:01:04.131 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:04.131 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:04.131 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:04.131 rmmod: ERROR: Module irdma is not currently loaded
00:01:04.131 rmmod: ERROR: Module i40iw is not currently loaded
00:01:04.131 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:04.131 + true
00:01:04.131 + for D in $DRIVERS
00:01:04.131 + sudo modprobe ice
00:01:04.131 + exit 0
00:01:04.141 [Pipeline] }
00:01:04.158 [Pipeline] // withEnv
00:01:04.163 [Pipeline] }
00:01:04.180 [Pipeline] // stage
00:01:04.189 [Pipeline] catchError
00:01:04.190 [Pipeline] {
00:01:04.205 [Pipeline] timeout
00:01:04.205 Timeout set to expire in 40 min
00:01:04.206 [Pipeline] {
00:01:04.221 [Pipeline] stage
00:01:04.223 [Pipeline] { (Tests)
00:01:04.242 [Pipeline] sh
00:01:04.525 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.525 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.525 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.525 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:04.525 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:04.525 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.525 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:04.525 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.525 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:04.525 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:04.525 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:04.525 + source /etc/os-release
00:01:04.525 ++ NAME='Fedora Linux'
00:01:04.525 ++ VERSION='38 (Cloud Edition)'
00:01:04.525 ++ ID=fedora
00:01:04.525 ++ VERSION_ID=38
00:01:04.525 ++ VERSION_CODENAME=
00:01:04.525 ++ PLATFORM_ID=platform:f38
00:01:04.525 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:04.525 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:04.525 ++ LOGO=fedora-logo-icon
00:01:04.525 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:04.525 ++ HOME_URL=https://fedoraproject.org/
00:01:04.525 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:04.525 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:04.525 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:04.525 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:04.525 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:04.525 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:04.525 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:04.525 ++ SUPPORT_END=2024-05-14
00:01:04.525 ++ VARIANT='Cloud Edition'
00:01:04.525 ++ VARIANT_ID=cloud
00:01:04.525 + uname -a
00:01:04.525 Linux spdk-wfp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:04.525 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:07.064 Hugepages
00:01:07.064 node hugesize free / total
00:01:07.064 node0 1048576kB 0 / 0
00:01:07.064 node0 2048kB 0 / 0
00:01:07.064 node1 1048576kB 0 / 0
00:01:07.064 node1 2048kB 0 / 0
00:01:07.064
00:01:07.064 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:07.064 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:07.064 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:07.064 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:07.064 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:07.064 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:07.064 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:07.064 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:07.064 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:07.064 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:07.064 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:01:07.064 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:07.064 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:07.064 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:07.065 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:07.065 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:07.065 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:07.065 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:07.065 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:07.065 + rm -f /tmp/spdk-ld-path
00:01:07.065 + source autorun-spdk.conf
00:01:07.065 ++ RUN_NIGHTLY=1
00:01:07.065 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.065 ++ SPDK_TEST_NVMF=1
00:01:07.065 ++ SPDK_TEST_NVME_CLI=1
00:01:07.065 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.065 ++ SPDK_TEST_NVMF_NICS=e810
00:01:07.065 ++ SPDK_RUN_UBSAN=1
00:01:07.065 ++ NET_TYPE=phy
00:01:07.065 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:07.065 + [[ -n '' ]]
00:01:07.065 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.065 + for M in /var/spdk/build-*-manifest.txt
00:01:07.065 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:07.065 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.065 + for M in /var/spdk/build-*-manifest.txt
00:01:07.065 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:07.065 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.065 ++ uname
00:01:07.065 + [[ Linux == \L\i\n\u\x ]]
00:01:07.065 + sudo dmesg -T
00:01:07.325 + sudo dmesg --clear
00:01:07.325 + dmesg_pid=1976500
00:01:07.325 + [[ Fedora Linux == FreeBSD ]]
00:01:07.325 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.325 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.325 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:07.325 + [[ -x /usr/src/fio-static/fio ]]
00:01:07.325 + export FIO_BIN=/usr/src/fio-static/fio
00:01:07.325 + FIO_BIN=/usr/src/fio-static/fio
00:01:07.325 + sudo dmesg -Tw
00:01:07.325 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:07.325 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:07.325 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:07.325 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.325 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.325 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:07.325 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.325 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.325 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:07.325 Test configuration:
00:01:07.325 RUN_NIGHTLY=1
00:01:07.325 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.325 SPDK_TEST_NVMF=1
00:01:07.325 SPDK_TEST_NVME_CLI=1
00:01:07.325 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.325 SPDK_TEST_NVMF_NICS=e810
00:01:07.325 SPDK_RUN_UBSAN=1
00:01:07.325 NET_TYPE=phy
08:01:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:07.325 08:01:40 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:07.325 08:01:40 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:07.325 08:01:40 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:07.325 08:01:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:07.325 08:01:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:07.325 08:01:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:07.325 08:01:40 -- paths/export.sh@5 -- $ export PATH
00:01:07.325 08:01:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:07.325 08:01:40 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:07.325 08:01:40 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:07.325 08:01:40 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707807700.XXXXXX
00:01:07.325 08:01:40 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707807700.pM72g3
00:01:07.325 08:01:40 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:07.325 08:01:40 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:07.325 08:01:40 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:07.325 08:01:40 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:07.326 08:01:40 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:07.326 08:01:40 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:07.326 08:01:40 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:01:07.326 08:01:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.326 08:01:40 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:07.326 08:01:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:07.326 08:01:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:07.326 08:01:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.326 08:01:40 -- spdk/autobuild.sh@16 -- $ date -u
00:01:07.326 Tue Feb 13 07:01:40 AM UTC 2024
00:01:07.326 08:01:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:07.326 v24.05-pre-72-g3bec6cb23
00:01:07.326 08:01:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:07.326 08:01:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:07.326 08:01:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:07.326 08:01:40 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']'
00:01:07.326 08:01:40 -- common/autotest_common.sh@1081 -- $ xtrace_disable
00:01:07.326 08:01:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.326 ************************************
00:01:07.326 START TEST ubsan
00:01:07.326 ************************************
00:01:07.326 08:01:40 -- common/autotest_common.sh@1102 -- $ echo 'using ubsan'
00:01:07.326 using ubsan
00:01:07.326
00:01:07.326 real 0m0.000s
00:01:07.326 user 0m0.000s
00:01:07.326 sys 0m0.000s
00:01:07.326 08:01:40 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:07.326 08:01:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:07.326 ************************************
00:01:07.326 END TEST ubsan
00:01:07.326 ************************************
00:01:07.326 08:01:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:07.326 08:01:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:07.326 08:01:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:07.326 08:01:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:07.326 08:01:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:07.326 08:01:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:07.326 08:01:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:07.326 08:01:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:07.326 08:01:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:07.585 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:07.585 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:07.844 Using 'verbs' RDMA provider
00:01:20.629 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:30.617 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:30.876 Creating mk/config.mk...done.
00:01:30.876 Creating mk/cc.flags.mk...done.
00:01:30.876 Type 'make' to build.
00:01:30.876 08:02:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:30.876 08:02:04 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']'
00:01:30.876 08:02:04 -- common/autotest_common.sh@1081 -- $ xtrace_disable
00:01:30.876 08:02:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.876 ************************************
00:01:30.876 START TEST make
00:01:30.876 ************************************
00:01:30.876 08:02:04 -- common/autotest_common.sh@1102 -- $ make -j96
00:01:31.134 make[1]: Nothing to be done for 'all'.
00:01:39.289 The Meson build system
00:01:39.289 Version: 1.3.1
00:01:39.289 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:39.289 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:39.289 Build type: native build
00:01:39.289 Program cat found: YES (/usr/bin/cat)
00:01:39.289 Project name: DPDK
00:01:39.289 Project version: 23.11.0
00:01:39.289 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:39.289 C linker for the host machine: cc ld.bfd 2.39-16
00:01:39.289 Host machine cpu family: x86_64
00:01:39.289 Host machine cpu: x86_64
00:01:39.289 Message: ## Building in Developer Mode ##
00:01:39.289 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:39.289 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:39.289 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:39.289 Program python3 found: YES (/usr/bin/python3)
00:01:39.289 Program cat found: YES (/usr/bin/cat)
00:01:39.289 Compiler for C supports arguments -march=native: YES
00:01:39.289 Checking for size of "void *" : 8
00:01:39.289 Checking for size of "void *" : 8 (cached)
00:01:39.289 Library m found: YES
00:01:39.289 Library numa found: YES
00:01:39.289 Has header "numaif.h" : YES
00:01:39.289 Library fdt found: NO
00:01:39.289 Library execinfo found: NO
00:01:39.289 Has header "execinfo.h" : YES
00:01:39.289 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:39.289 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:39.289 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:39.289 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:39.289 Run-time dependency openssl found: YES 3.0.9
00:01:39.289 Run-time dependency libpcap found: YES 1.10.4
00:01:39.289 Has header "pcap.h" with dependency libpcap: YES
00:01:39.289 Compiler for C supports arguments -Wcast-qual: YES
00:01:39.289 Compiler for C supports arguments -Wdeprecated: YES
00:01:39.289 Compiler for C supports arguments -Wformat: YES
00:01:39.289 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:39.289 Compiler for C supports arguments -Wformat-security: NO
00:01:39.289 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:39.289 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:39.289 Compiler for C supports arguments -Wnested-externs: YES
00:01:39.289 Compiler for C supports arguments -Wold-style-definition: YES
00:01:39.289 Compiler for C supports arguments -Wpointer-arith: YES
00:01:39.289 Compiler for C supports arguments -Wsign-compare: YES
00:01:39.289 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:39.289 Compiler for C supports arguments -Wundef: YES
00:01:39.289 Compiler for C supports arguments -Wwrite-strings: YES
00:01:39.289 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:39.289 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:39.289 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:39.289 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:39.289 Program objdump found: YES (/usr/bin/objdump)
00:01:39.289 Compiler for C supports arguments -mavx512f: YES
00:01:39.289 Checking if "AVX512 checking" compiles: YES
00:01:39.289 Fetching value of define "__SSE4_2__" : 1
00:01:39.289 Fetching value of define "__AES__" : 1
00:01:39.289 Fetching value of define "__AVX__" : 1
00:01:39.289 Fetching value of define "__AVX2__" : 1
00:01:39.289 Fetching value of define "__AVX512BW__" : 1
00:01:39.289 Fetching value of define "__AVX512CD__" : 1
00:01:39.289 Fetching value of define "__AVX512DQ__" : 1
00:01:39.289 Fetching value of define "__AVX512F__" : 1
00:01:39.289 Fetching value of define "__AVX512VL__" : 1
00:01:39.289 Fetching value of define "__PCLMUL__" : 1
00:01:39.289 Fetching value of define "__RDRND__" : 1
00:01:39.289 Fetching value of define "__RDSEED__" : 1
00:01:39.289 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:39.289 Fetching value of define "__znver1__" : (undefined)
00:01:39.289 Fetching value of define "__znver2__" : (undefined)
00:01:39.289 Fetching value of define "__znver3__" : (undefined)
00:01:39.289 Fetching value of define "__znver4__" : (undefined)
00:01:39.289 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:39.289 Message: lib/log: Defining dependency "log"
00:01:39.289 Message: lib/kvargs: Defining dependency "kvargs"
00:01:39.289 Message: lib/telemetry: Defining dependency "telemetry"
00:01:39.289 Checking for function "getentropy" : NO
00:01:39.289 Message: lib/eal: Defining dependency "eal"
00:01:39.289 Message: lib/ring: Defining dependency "ring"
00:01:39.289 Message: lib/rcu: Defining dependency "rcu"
00:01:39.289 Message: lib/mempool: Defining dependency "mempool"
00:01:39.289 Message: lib/mbuf: Defining dependency "mbuf"
00:01:39.289 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:39.289 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:39.289 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:39.289 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:39.289 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:39.289 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:39.289 Compiler for C supports arguments -mpclmul: YES
00:01:39.289 Compiler for C supports arguments -maes: YES
00:01:39.289 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:39.289 Compiler for C supports arguments -mavx512bw: YES
00:01:39.289 Compiler for C supports arguments -mavx512dq: YES
00:01:39.289 Compiler for C supports arguments -mavx512vl: YES
00:01:39.289 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:39.289 Compiler for C supports arguments -mavx2: YES
00:01:39.289 Compiler for C supports arguments -mavx: YES
00:01:39.289 Message: lib/net: Defining dependency "net"
00:01:39.289 Message: lib/meter: Defining dependency "meter"
00:01:39.289 Message: lib/ethdev: Defining dependency "ethdev"
00:01:39.289 Message: lib/pci: Defining dependency "pci"
00:01:39.289 Message: lib/cmdline: Defining dependency "cmdline"
00:01:39.289 Message: lib/hash: Defining dependency "hash"
00:01:39.289 Message: lib/timer: Defining dependency "timer"
00:01:39.289 Message: lib/compressdev: Defining dependency "compressdev"
00:01:39.289 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:39.289 Message: lib/dmadev: Defining dependency "dmadev"
00:01:39.289 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:39.289 Message: lib/power: Defining dependency "power"
00:01:39.289 Message: lib/reorder: Defining dependency "reorder"
00:01:39.289 Message: lib/security: Defining dependency "security"
00:01:39.289 Has header "linux/userfaultfd.h" : YES
00:01:39.289 Has header "linux/vduse.h" : YES
00:01:39.289 Message: lib/vhost: Defining dependency "vhost"
00:01:39.289 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:39.289 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:39.289 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:39.289 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:39.289 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:39.289 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:39.289 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:39.289 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:39.289 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:39.289 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:39.289 Program doxygen found: YES (/usr/bin/doxygen)
00:01:39.289 Configuring doxy-api-html.conf using configuration
00:01:39.289 Configuring doxy-api-man.conf using configuration
00:01:39.289 Program mandb found: YES (/usr/bin/mandb)
00:01:39.289 Program sphinx-build found: NO
00:01:39.289 Configuring rte_build_config.h using configuration
00:01:39.289 Message:
00:01:39.289 =================
00:01:39.289 Applications Enabled
00:01:39.289 =================
00:01:39.289
00:01:39.289 apps:
00:01:39.289
00:01:39.289
00:01:39.289 Message:
00:01:39.289 =================
00:01:39.289 Libraries Enabled
00:01:39.289 =================
00:01:39.289
00:01:39.289 libs:
00:01:39.289 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:39.289 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:39.289 cryptodev, dmadev, power, reorder, security, vhost,
00:01:39.289
00:01:39.289 Message:
00:01:39.289 ===============
00:01:39.289 Drivers Enabled
00:01:39.289 ===============
00:01:39.289
00:01:39.289 common:
00:01:39.289
00:01:39.289 bus:
00:01:39.289 pci, vdev,
00:01:39.289 mempool:
00:01:39.289 ring,
00:01:39.289 dma:
00:01:39.289
00:01:39.289 net:
00:01:39.289
00:01:39.289 crypto:
00:01:39.289
00:01:39.289 compress:
00:01:39.289
00:01:39.289 vdpa:
00:01:39.289
00:01:39.289
00:01:39.289 Message:
00:01:39.289 =================
00:01:39.289 Content Skipped
00:01:39.289 =================
00:01:39.289
00:01:39.289 apps:
00:01:39.289 dumpcap: explicitly disabled via build config
00:01:39.289 graph: explicitly disabled via build config
00:01:39.289 pdump: explicitly disabled via build config
00:01:39.289 proc-info: explicitly disabled via build config
00:01:39.289 test-acl: explicitly disabled via build config
00:01:39.289 test-bbdev: explicitly disabled via build config
00:01:39.289 test-cmdline: explicitly disabled via build config
00:01:39.290 test-compress-perf: explicitly disabled via build config
00:01:39.290 test-crypto-perf: explicitly disabled via build config
00:01:39.290 test-dma-perf: explicitly disabled via build config
00:01:39.290 test-eventdev: explicitly disabled via build config
00:01:39.290 test-fib: explicitly disabled via build config
00:01:39.290 test-flow-perf: explicitly disabled via build config
00:01:39.290 test-gpudev: explicitly disabled via build config
00:01:39.290 test-mldev: explicitly disabled via build config
00:01:39.290 test-pipeline: explicitly disabled via build config
00:01:39.290 test-pmd: explicitly disabled via build config
00:01:39.290 test-regex: explicitly disabled via build config
00:01:39.290 test-sad: explicitly disabled via build config
00:01:39.290 test-security-perf: explicitly disabled via build config
00:01:39.290
00:01:39.290 libs:
00:01:39.290 metrics: explicitly disabled via build config
00:01:39.290 acl: explicitly disabled via build config
00:01:39.290 bbdev: explicitly disabled via build config
00:01:39.290 bitratestats: explicitly disabled via build config
00:01:39.290 bpf: explicitly disabled via build config
00:01:39.290 cfgfile: explicitly disabled via build config
00:01:39.290 distributor: explicitly disabled via build config
00:01:39.290 efd: explicitly disabled via build config
00:01:39.290 eventdev: explicitly disabled via build config
00:01:39.290 dispatcher: explicitly disabled via build config
00:01:39.290 gpudev: explicitly disabled via build config
00:01:39.290 gro: explicitly disabled via build config
00:01:39.290 gso: explicitly disabled via build config
00:01:39.290 ip_frag: explicitly disabled via build config
00:01:39.290 jobstats: explicitly disabled via build config
00:01:39.290 latencystats: explicitly disabled via build config
00:01:39.290 lpm: explicitly disabled via build config
00:01:39.290 member: explicitly disabled via build config
00:01:39.290 pcapng: explicitly disabled via build config
00:01:39.290 rawdev: explicitly disabled via build config
00:01:39.290 regexdev: explicitly disabled via build config
00:01:39.290 mldev: explicitly disabled via build config
00:01:39.290 rib: explicitly disabled via build config
00:01:39.290 sched: explicitly disabled via build config
00:01:39.290 stack: explicitly disabled via build config
00:01:39.290 ipsec: explicitly disabled via build config
00:01:39.290 pdcp: explicitly disabled via build config
00:01:39.290 fib: explicitly disabled via build config
00:01:39.290 port: explicitly disabled via build config
00:01:39.290 pdump: explicitly disabled via build config
00:01:39.290 table: explicitly disabled via build config
00:01:39.290 pipeline: explicitly disabled via build config
00:01:39.290 graph: explicitly disabled via build config
00:01:39.290 node: explicitly disabled via build config
00:01:39.290
00:01:39.290 drivers:
00:01:39.290 common/cpt: not in enabled drivers build config
00:01:39.290 common/dpaax: not in enabled drivers build config
00:01:39.290 common/iavf: not in enabled drivers build config
00:01:39.290 common/idpf: not in enabled drivers build config
00:01:39.290 common/mvep: not in enabled drivers build config
00:01:39.290 common/octeontx: not in enabled drivers build config
00:01:39.290 bus/auxiliary: not in enabled drivers build config
00:01:39.290 bus/cdx: not in enabled drivers build config
00:01:39.290 bus/dpaa: not in enabled drivers build config
00:01:39.290 bus/fslmc: not in enabled drivers build config
00:01:39.290 bus/ifpga: not in enabled drivers build config
00:01:39.290 bus/platform: not in enabled drivers build config
00:01:39.290 bus/vmbus: not in enabled drivers build config
00:01:39.290 common/cnxk: not in enabled drivers build config
00:01:39.290 common/mlx5: not in enabled drivers build config
00:01:39.290 common/nfp: not in enabled drivers build config
00:01:39.290 common/qat: not in enabled drivers build config
00:01:39.290 common/sfc_efx: not in enabled drivers build config
00:01:39.290 mempool/bucket: not in enabled drivers build config
00:01:39.290 mempool/cnxk: not in enabled drivers build config
00:01:39.290 mempool/dpaa: not in enabled drivers build config
00:01:39.290 mempool/dpaa2: not in enabled drivers build config
00:01:39.290 mempool/octeontx: not in enabled drivers build config
00:01:39.290 mempool/stack: not in enabled drivers build config
00:01:39.290 dma/cnxk: not in enabled drivers build config
00:01:39.290 dma/dpaa: not in enabled drivers build config
00:01:39.290 dma/dpaa2: not in enabled drivers build config
00:01:39.290 dma/hisilicon: not in enabled drivers build config
00:01:39.290 dma/idxd: not in enabled drivers build config
00:01:39.290 dma/ioat: not in enabled drivers build config
00:01:39.290 dma/skeleton: not in enabled drivers build config
00:01:39.290 net/af_packet: not in enabled drivers build config
00:01:39.290 net/af_xdp: not in enabled drivers build config
00:01:39.290 net/ark: not in enabled drivers build config
00:01:39.290 net/atlantic: not in enabled drivers build config
00:01:39.290 net/avp: not in enabled drivers build config
00:01:39.290 net/axgbe: not in enabled drivers build config
00:01:39.290 net/bnx2x: not in enabled drivers build config
00:01:39.290 net/bnxt: not in enabled drivers build config
00:01:39.290 net/bonding: not in enabled drivers build config
00:01:39.290 net/cnxk: not in enabled drivers build config
00:01:39.290 net/cpfl: not in enabled drivers build config
00:01:39.290 net/cxgbe: not in enabled drivers build config
00:01:39.290 net/dpaa: not in enabled drivers build config
00:01:39.290 net/dpaa2: not in enabled drivers build config
00:01:39.290 net/e1000:
not in enabled drivers build config 00:01:39.290 net/ena: not in enabled drivers build config 00:01:39.290 net/enetc: not in enabled drivers build config 00:01:39.290 net/enetfec: not in enabled drivers build config 00:01:39.290 net/enic: not in enabled drivers build config 00:01:39.290 net/failsafe: not in enabled drivers build config 00:01:39.290 net/fm10k: not in enabled drivers build config 00:01:39.290 net/gve: not in enabled drivers build config 00:01:39.290 net/hinic: not in enabled drivers build config 00:01:39.290 net/hns3: not in enabled drivers build config 00:01:39.290 net/i40e: not in enabled drivers build config 00:01:39.290 net/iavf: not in enabled drivers build config 00:01:39.290 net/ice: not in enabled drivers build config 00:01:39.290 net/idpf: not in enabled drivers build config 00:01:39.290 net/igc: not in enabled drivers build config 00:01:39.290 net/ionic: not in enabled drivers build config 00:01:39.290 net/ipn3ke: not in enabled drivers build config 00:01:39.290 net/ixgbe: not in enabled drivers build config 00:01:39.290 net/mana: not in enabled drivers build config 00:01:39.290 net/memif: not in enabled drivers build config 00:01:39.290 net/mlx4: not in enabled drivers build config 00:01:39.290 net/mlx5: not in enabled drivers build config 00:01:39.290 net/mvneta: not in enabled drivers build config 00:01:39.290 net/mvpp2: not in enabled drivers build config 00:01:39.290 net/netvsc: not in enabled drivers build config 00:01:39.290 net/nfb: not in enabled drivers build config 00:01:39.290 net/nfp: not in enabled drivers build config 00:01:39.290 net/ngbe: not in enabled drivers build config 00:01:39.290 net/null: not in enabled drivers build config 00:01:39.290 net/octeontx: not in enabled drivers build config 00:01:39.290 net/octeon_ep: not in enabled drivers build config 00:01:39.290 net/pcap: not in enabled drivers build config 00:01:39.290 net/pfe: not in enabled drivers build config 00:01:39.290 net/qede: not in enabled drivers build 
config 00:01:39.290 net/ring: not in enabled drivers build config 00:01:39.290 net/sfc: not in enabled drivers build config 00:01:39.290 net/softnic: not in enabled drivers build config 00:01:39.290 net/tap: not in enabled drivers build config 00:01:39.290 net/thunderx: not in enabled drivers build config 00:01:39.290 net/txgbe: not in enabled drivers build config 00:01:39.290 net/vdev_netvsc: not in enabled drivers build config 00:01:39.290 net/vhost: not in enabled drivers build config 00:01:39.290 net/virtio: not in enabled drivers build config 00:01:39.290 net/vmxnet3: not in enabled drivers build config 00:01:39.290 raw/*: missing internal dependency, "rawdev" 00:01:39.290 crypto/armv8: not in enabled drivers build config 00:01:39.290 crypto/bcmfs: not in enabled drivers build config 00:01:39.290 crypto/caam_jr: not in enabled drivers build config 00:01:39.290 crypto/ccp: not in enabled drivers build config 00:01:39.290 crypto/cnxk: not in enabled drivers build config 00:01:39.290 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.290 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.290 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.290 crypto/mlx5: not in enabled drivers build config 00:01:39.290 crypto/mvsam: not in enabled drivers build config 00:01:39.290 crypto/nitrox: not in enabled drivers build config 00:01:39.290 crypto/null: not in enabled drivers build config 00:01:39.290 crypto/octeontx: not in enabled drivers build config 00:01:39.290 crypto/openssl: not in enabled drivers build config 00:01:39.290 crypto/scheduler: not in enabled drivers build config 00:01:39.290 crypto/uadk: not in enabled drivers build config 00:01:39.290 crypto/virtio: not in enabled drivers build config 00:01:39.290 compress/isal: not in enabled drivers build config 00:01:39.290 compress/mlx5: not in enabled drivers build config 00:01:39.290 compress/octeontx: not in enabled drivers build config 00:01:39.290 compress/zlib: not in 
enabled drivers build config 00:01:39.290 regex/*: missing internal dependency, "regexdev" 00:01:39.290 ml/*: missing internal dependency, "mldev" 00:01:39.290 vdpa/ifc: not in enabled drivers build config 00:01:39.290 vdpa/mlx5: not in enabled drivers build config 00:01:39.290 vdpa/nfp: not in enabled drivers build config 00:01:39.290 vdpa/sfc: not in enabled drivers build config 00:01:39.290 event/*: missing internal dependency, "eventdev" 00:01:39.290 baseband/*: missing internal dependency, "bbdev" 00:01:39.290 gpu/*: missing internal dependency, "gpudev" 00:01:39.290 00:01:39.290 00:01:39.290 Build targets in project: 85 00:01:39.290 00:01:39.290 DPDK 23.11.0 00:01:39.290 00:01:39.290 User defined options 00:01:39.290 buildtype : debug 00:01:39.290 default_library : shared 00:01:39.290 libdir : lib 00:01:39.290 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:39.290 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:39.290 c_link_args : 00:01:39.290 cpu_instruction_set: native 00:01:39.290 disable_apps : test-regex,test-sad,test-gpudev,dumpcap,test-fib,proc-info,graph,test-compress-perf,pdump,test-acl,test-security-perf,test,test-pmd,test-crypto-perf,test-eventdev,test-flow-perf,test-dma-perf,test-mldev,test-pipeline,test-cmdline,test-bbdev 00:01:39.291 disable_libs : pdcp,jobstats,gpudev,cfgfile,distributor,graph,stack,pdump,bbdev,fib,bpf,ipsec,eventdev,node,mldev,metrics,gso,dispatcher,lpm,table,bitratestats,member,port,regexdev,latencystats,rib,pcapng,sched,pipeline,efd,rawdev,acl,ip_frag,gro 00:01:39.291 enable_docs : false 00:01:39.291 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:39.291 enable_kmods : false 00:01:39.291 tests : false 00:01:39.291 00:01:39.291 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.291 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.291 [1/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.291 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.291 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.291 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.553 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.553 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.553 [7/265] Linking static target lib/librte_kvargs.a 00:01:39.553 [8/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.553 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.553 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.553 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.553 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.553 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.553 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.553 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.553 [16/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.553 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.553 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.553 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:39.553 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.553 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.553 [22/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.553 [23/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:39.553 [24/265] Linking static 
target lib/librte_log.a 00:01:39.553 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.553 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.553 [27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.553 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.553 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.553 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.553 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.553 [32/265] Linking static target lib/librte_pci.a 00:01:39.553 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.553 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.824 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.824 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.824 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.824 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.085 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.085 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.085 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.085 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.085 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.085 [44/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.085 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.085 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.085 [47/265] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.085 [48/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.085 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.085 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.085 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.085 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.085 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.085 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.085 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.085 [56/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.085 [57/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.085 [58/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.085 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.085 [60/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.085 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.085 [62/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.085 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.085 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.085 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.085 [66/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.085 [67/265] Linking static target lib/librte_meter.a 00:01:40.085 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.085 [69/265] Compiling 
C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.085 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.085 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.085 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.085 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.085 [74/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.085 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.085 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.085 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.085 [78/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.085 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.085 [80/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.085 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.085 [82/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.085 [83/265] Linking static target lib/librte_ring.a 00:01:40.085 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.085 [85/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.085 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.085 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.085 [88/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.085 [89/265] Linking static target lib/librte_telemetry.a 00:01:40.085 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.085 [91/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.085 [92/265] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:01:40.085 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.085 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.085 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.085 [96/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.085 [97/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.085 [98/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.085 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.085 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.085 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.085 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.085 [103/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.085 [104/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.085 [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.085 [106/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.085 [107/265] Linking static target lib/librte_cmdline.a 00:01:40.085 [108/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.085 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.085 [110/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.085 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.085 [112/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.085 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.085 [114/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 
00:01:40.085 [115/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.085 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.085 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.085 [118/265] Linking static target lib/librte_mempool.a 00:01:40.085 [119/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.085 [120/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.085 [121/265] Linking static target lib/librte_net.a 00:01:40.373 [122/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.373 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.373 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.373 [125/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.373 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.374 [127/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.374 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.374 [129/265] Linking static target lib/librte_rcu.a 00:01:40.374 [130/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.374 [131/265] Linking static target lib/librte_timer.a 00:01:40.374 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.374 [133/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.374 [134/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.374 [135/265] Linking static target lib/librte_eal.a 00:01:40.374 [136/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.374 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.374 [138/265] Generating lib/meter.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:40.374 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.374 [140/265] Linking target lib/librte_log.so.24.0 00:01:40.374 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.374 [142/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.374 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.374 [144/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.374 [145/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.374 [146/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.374 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.374 [148/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.374 [149/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.374 [150/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:40.374 [151/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.374 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.374 [153/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.374 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.374 [155/265] Linking static target lib/librte_dmadev.a 00:01:40.374 [156/265] Linking target lib/librte_kvargs.so.24.0 00:01:40.374 [157/265] Linking static target lib/librte_compressdev.a 00:01:40.645 [158/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.645 [159/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.645 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 
00:01:40.645 [161/265] Linking static target lib/librte_mbuf.a 00:01:40.645 [162/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.645 [163/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.645 [164/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.645 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.645 [166/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.645 [167/265] Linking target lib/librte_telemetry.so.24.0 00:01:40.645 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.645 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.645 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.645 [171/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.645 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.645 [173/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.645 [174/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.645 [175/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.645 [176/265] Linking static target lib/librte_hash.a 00:01:40.645 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.645 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.645 [179/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.645 [180/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.645 [181/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.645 [182/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.645 [183/265] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.645 [184/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.646 [185/265] Linking static target lib/librte_reorder.a 00:01:40.646 [186/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.646 [187/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:40.646 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.646 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.646 [190/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.646 [191/265] Linking static target lib/librte_power.a 00:01:40.646 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.646 [193/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:40.646 [194/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.646 [195/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.646 [196/265] Linking static target lib/librte_security.a 00:01:40.646 [197/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.646 [198/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.646 [199/265] Linking static target drivers/librte_mempool_ring.a 00:01:40.904 [200/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.904 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.904 [202/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.904 [203/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.904 [204/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.904 [205/265] Linking 
static target drivers/librte_bus_vdev.a 00:01:40.904 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.904 [207/265] Linking static target drivers/librte_bus_pci.a 00:01:40.904 [208/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.904 [209/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.904 [210/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.904 [211/265] Linking static target lib/librte_cryptodev.a 00:01:40.904 [212/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.162 [213/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.162 [214/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.162 [215/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.162 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.162 [217/265] Linking static target lib/librte_ethdev.a 00:01:41.162 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.162 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.162 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.420 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.420 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.420 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.420 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.355 [225/265] Compiling C 
object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:42.355 [226/265] Linking static target lib/librte_vhost.a 00:01:42.614 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.516 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.699 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.074 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.074 [231/265] Linking target lib/librte_eal.so.24.0 00:01:50.074 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:50.074 [233/265] Linking target lib/librte_ring.so.24.0 00:01:50.074 [234/265] Linking target lib/librte_meter.so.24.0 00:01:50.074 [235/265] Linking target lib/librte_timer.so.24.0 00:01:50.074 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:50.074 [237/265] Linking target lib/librte_dmadev.so.24.0 00:01:50.074 [238/265] Linking target lib/librte_pci.so.24.0 00:01:50.074 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:50.331 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:50.331 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:50.331 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:50.331 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:50.332 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:50.332 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:50.332 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:50.332 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:50.332 [248/265] Generating symbol file 
lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:50.332 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:50.332 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:50.589 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:50.589 [252/265] Linking target lib/librte_net.so.24.0 00:01:50.589 [253/265] Linking target lib/librte_reorder.so.24.0 00:01:50.589 [254/265] Linking target lib/librte_compressdev.so.24.0 00:01:50.589 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:50.589 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:50.848 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:50.848 [258/265] Linking target lib/librte_security.so.24.0 00:01:50.848 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:50.848 [260/265] Linking target lib/librte_hash.so.24.0 00:01:50.848 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:50.848 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:50.848 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:50.848 [264/265] Linking target lib/librte_power.so.24.0 00:01:51.106 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:51.106 INFO: autodetecting backend as ninja 00:01:51.106 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:51.674 CC lib/ut/ut.o 00:01:51.674 CC lib/log/log.o 00:01:51.674 CC lib/log/log_flags.o 00:01:51.674 CC lib/log/log_deprecated.o 00:01:51.674 CC lib/ut_mock/mock.o 00:01:51.933 LIB libspdk_ut.a 00:01:51.933 LIB libspdk_log.a 00:01:51.933 LIB libspdk_ut_mock.a 00:01:51.933 SO libspdk_ut.so.2.0 00:01:51.933 SO libspdk_ut_mock.so.6.0 00:01:51.933 SO libspdk_log.so.7.0 00:01:51.933 SYMLINK libspdk_ut.so 00:01:51.933 SYMLINK 
libspdk_ut_mock.so 00:01:51.933 SYMLINK libspdk_log.so 00:01:52.191 CC lib/dma/dma.o 00:01:52.191 CXX lib/trace_parser/trace.o 00:01:52.191 CC lib/util/base64.o 00:01:52.191 CC lib/util/bit_array.o 00:01:52.191 CC lib/util/cpuset.o 00:01:52.191 CC lib/util/crc16.o 00:01:52.191 CC lib/util/crc32.o 00:01:52.191 CC lib/util/crc32_ieee.o 00:01:52.191 CC lib/util/crc32c.o 00:01:52.191 CC lib/util/crc64.o 00:01:52.191 CC lib/util/dif.o 00:01:52.191 CC lib/util/fd.o 00:01:52.191 CC lib/util/file.o 00:01:52.191 CC lib/util/hexlify.o 00:01:52.191 CC lib/util/iov.o 00:01:52.191 CC lib/util/math.o 00:01:52.191 CC lib/util/pipe.o 00:01:52.191 CC lib/util/strerror_tls.o 00:01:52.191 CC lib/util/string.o 00:01:52.191 CC lib/ioat/ioat.o 00:01:52.191 CC lib/util/uuid.o 00:01:52.191 CC lib/util/fd_group.o 00:01:52.191 CC lib/util/xor.o 00:01:52.191 CC lib/util/zipf.o 00:01:52.191 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.191 CC lib/vfio_user/host/vfio_user.o 00:01:52.450 LIB libspdk_dma.a 00:01:52.450 SO libspdk_dma.so.4.0 00:01:52.450 SYMLINK libspdk_dma.so 00:01:52.450 LIB libspdk_ioat.a 00:01:52.450 SO libspdk_ioat.so.7.0 00:01:52.450 LIB libspdk_vfio_user.a 00:01:52.450 SO libspdk_vfio_user.so.5.0 00:01:52.450 SYMLINK libspdk_ioat.so 00:01:52.709 SYMLINK libspdk_vfio_user.so 00:01:52.709 LIB libspdk_util.a 00:01:52.709 SO libspdk_util.so.9.0 00:01:52.709 SYMLINK libspdk_util.so 00:01:52.968 LIB libspdk_trace_parser.a 00:01:52.968 SO libspdk_trace_parser.so.5.0 00:01:52.968 CC lib/json/json_util.o 00:01:52.968 CC lib/json/json_parse.o 00:01:52.968 CC lib/json/json_write.o 00:01:52.968 CC lib/rdma/common.o 00:01:52.968 CC lib/rdma/rdma_verbs.o 00:01:52.968 CC lib/idxd/idxd.o 00:01:52.968 CC lib/idxd/idxd_user.o 00:01:52.968 SYMLINK libspdk_trace_parser.so 00:01:52.968 CC lib/conf/conf.o 00:01:52.968 CC lib/vmd/led.o 00:01:52.968 CC lib/vmd/vmd.o 00:01:52.968 CC lib/env_dpdk/env.o 00:01:52.968 CC lib/env_dpdk/pci.o 00:01:52.968 CC lib/env_dpdk/memory.o 00:01:52.968 CC 
lib/env_dpdk/init.o 00:01:52.968 CC lib/env_dpdk/threads.o 00:01:52.968 CC lib/env_dpdk/pci_virtio.o 00:01:52.968 CC lib/env_dpdk/pci_ioat.o 00:01:52.968 CC lib/env_dpdk/pci_vmd.o 00:01:52.968 CC lib/env_dpdk/pci_idxd.o 00:01:52.968 CC lib/env_dpdk/pci_event.o 00:01:52.968 CC lib/env_dpdk/sigbus_handler.o 00:01:52.968 CC lib/env_dpdk/pci_dpdk.o 00:01:52.968 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:52.968 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:53.227 LIB libspdk_conf.a 00:01:53.227 LIB libspdk_json.a 00:01:53.227 LIB libspdk_rdma.a 00:01:53.227 SO libspdk_conf.so.6.0 00:01:53.227 SO libspdk_json.so.6.0 00:01:53.227 SO libspdk_rdma.so.6.0 00:01:53.227 SYMLINK libspdk_conf.so 00:01:53.227 SYMLINK libspdk_json.so 00:01:53.227 SYMLINK libspdk_rdma.so 00:01:53.486 LIB libspdk_idxd.a 00:01:53.486 SO libspdk_idxd.so.12.0 00:01:53.486 LIB libspdk_vmd.a 00:01:53.486 SYMLINK libspdk_idxd.so 00:01:53.486 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.486 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.486 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.486 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.486 SO libspdk_vmd.so.6.0 00:01:53.486 SYMLINK libspdk_vmd.so 00:01:53.746 LIB libspdk_jsonrpc.a 00:01:53.746 SO libspdk_jsonrpc.so.6.0 00:01:53.746 SYMLINK libspdk_jsonrpc.so 00:01:54.005 LIB libspdk_env_dpdk.a 00:01:54.005 CC lib/rpc/rpc.o 00:01:54.005 SO libspdk_env_dpdk.so.14.0 00:01:54.264 SYMLINK libspdk_env_dpdk.so 00:01:54.264 LIB libspdk_rpc.a 00:01:54.264 SO libspdk_rpc.so.6.0 00:01:54.264 SYMLINK libspdk_rpc.so 00:01:54.523 CC lib/notify/notify.o 00:01:54.523 CC lib/notify/notify_rpc.o 00:01:54.523 CC lib/trace/trace_flags.o 00:01:54.523 CC lib/trace/trace.o 00:01:54.523 CC lib/trace/trace_rpc.o 00:01:54.523 CC lib/sock/sock.o 00:01:54.523 CC lib/sock/sock_rpc.o 00:01:54.523 LIB libspdk_notify.a 00:01:54.523 SO libspdk_notify.so.6.0 00:01:54.781 LIB libspdk_trace.a 00:01:54.781 SO libspdk_trace.so.10.0 00:01:54.781 SYMLINK libspdk_notify.so 00:01:54.781 SYMLINK libspdk_trace.so 
00:01:54.781 LIB libspdk_sock.a 00:01:54.781 SO libspdk_sock.so.9.0 00:01:54.781 SYMLINK libspdk_sock.so 00:01:55.039 CC lib/thread/thread.o 00:01:55.039 CC lib/thread/iobuf.o 00:01:55.039 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:55.039 CC lib/nvme/nvme_ctrlr.o 00:01:55.039 CC lib/nvme/nvme_fabric.o 00:01:55.039 CC lib/nvme/nvme_ns_cmd.o 00:01:55.039 CC lib/nvme/nvme_ns.o 00:01:55.039 CC lib/nvme/nvme_pcie_common.o 00:01:55.039 CC lib/nvme/nvme_pcie.o 00:01:55.039 CC lib/nvme/nvme_quirks.o 00:01:55.039 CC lib/nvme/nvme_qpair.o 00:01:55.039 CC lib/nvme/nvme.o 00:01:55.039 CC lib/nvme/nvme_transport.o 00:01:55.039 CC lib/nvme/nvme_discovery.o 00:01:55.039 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:55.039 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:55.039 CC lib/nvme/nvme_opal.o 00:01:55.039 CC lib/nvme/nvme_tcp.o 00:01:55.039 CC lib/nvme/nvme_poll_group.o 00:01:55.039 CC lib/nvme/nvme_io_msg.o 00:01:55.039 CC lib/nvme/nvme_zns.o 00:01:55.039 CC lib/nvme/nvme_cuse.o 00:01:55.039 CC lib/nvme/nvme_vfio_user.o 00:01:55.039 CC lib/nvme/nvme_rdma.o 00:01:55.974 LIB libspdk_thread.a 00:01:55.974 SO libspdk_thread.so.10.0 00:01:56.275 SYMLINK libspdk_thread.so 00:01:56.275 CC lib/virtio/virtio.o 00:01:56.275 CC lib/blob/blobstore.o 00:01:56.275 CC lib/virtio/virtio_vhost_user.o 00:01:56.275 CC lib/blob/request.o 00:01:56.275 CC lib/blob/zeroes.o 00:01:56.275 CC lib/virtio/virtio_vfio_user.o 00:01:56.275 CC lib/virtio/virtio_pci.o 00:01:56.275 CC lib/blob/blob_bs_dev.o 00:01:56.275 CC lib/accel/accel.o 00:01:56.275 CC lib/accel/accel_sw.o 00:01:56.275 CC lib/accel/accel_rpc.o 00:01:56.275 CC lib/init/json_config.o 00:01:56.275 CC lib/init/subsystem.o 00:01:56.275 CC lib/init/subsystem_rpc.o 00:01:56.275 CC lib/init/rpc.o 00:01:56.556 LIB libspdk_init.a 00:01:56.556 SO libspdk_init.so.5.0 00:01:56.556 LIB libspdk_virtio.a 00:01:56.556 SO libspdk_virtio.so.7.0 00:01:56.556 LIB libspdk_nvme.a 00:01:56.556 SYMLINK libspdk_init.so 00:01:56.556 SYMLINK libspdk_virtio.so 00:01:56.815 SO 
libspdk_nvme.so.13.0 00:01:56.815 CC lib/event/reactor.o 00:01:56.815 CC lib/event/app.o 00:01:56.815 CC lib/event/app_rpc.o 00:01:56.815 CC lib/event/log_rpc.o 00:01:56.815 CC lib/event/scheduler_static.o 00:01:56.815 SYMLINK libspdk_nvme.so 00:01:57.074 LIB libspdk_accel.a 00:01:57.074 LIB libspdk_event.a 00:01:57.074 SO libspdk_accel.so.15.0 00:01:57.074 SO libspdk_event.so.13.0 00:01:57.074 SYMLINK libspdk_accel.so 00:01:57.074 SYMLINK libspdk_event.so 00:01:57.332 CC lib/bdev/bdev.o 00:01:57.332 CC lib/bdev/bdev_rpc.o 00:01:57.332 CC lib/bdev/part.o 00:01:57.332 CC lib/bdev/bdev_zone.o 00:01:57.332 CC lib/bdev/scsi_nvme.o 00:01:58.266 LIB libspdk_blob.a 00:01:58.266 SO libspdk_blob.so.11.0 00:01:58.266 SYMLINK libspdk_blob.so 00:01:58.524 CC lib/lvol/lvol.o 00:01:58.524 CC lib/blobfs/blobfs.o 00:01:58.524 CC lib/blobfs/tree.o 00:01:59.090 LIB libspdk_lvol.a 00:01:59.090 LIB libspdk_blobfs.a 00:01:59.090 SO libspdk_lvol.so.10.0 00:01:59.090 LIB libspdk_bdev.a 00:01:59.090 SO libspdk_blobfs.so.10.0 00:01:59.090 SO libspdk_bdev.so.15.0 00:01:59.090 SYMLINK libspdk_lvol.so 00:01:59.090 SYMLINK libspdk_blobfs.so 00:01:59.090 SYMLINK libspdk_bdev.so 00:01:59.348 CC lib/nbd/nbd.o 00:01:59.348 CC lib/nbd/nbd_rpc.o 00:01:59.348 CC lib/ublk/ublk_rpc.o 00:01:59.348 CC lib/scsi/dev.o 00:01:59.348 CC lib/scsi/lun.o 00:01:59.348 CC lib/ublk/ublk.o 00:01:59.348 CC lib/scsi/scsi_bdev.o 00:01:59.348 CC lib/scsi/port.o 00:01:59.348 CC lib/scsi/scsi.o 00:01:59.348 CC lib/scsi/scsi_pr.o 00:01:59.348 CC lib/scsi/task.o 00:01:59.348 CC lib/scsi/scsi_rpc.o 00:01:59.348 CC lib/nvmf/ctrlr.o 00:01:59.348 CC lib/nvmf/ctrlr_discovery.o 00:01:59.348 CC lib/nvmf/ctrlr_bdev.o 00:01:59.348 CC lib/nvmf/subsystem.o 00:01:59.348 CC lib/nvmf/nvmf.o 00:01:59.348 CC lib/ftl/ftl_core.o 00:01:59.348 CC lib/nvmf/nvmf_rpc.o 00:01:59.348 CC lib/ftl/ftl_init.o 00:01:59.348 CC lib/ftl/ftl_layout.o 00:01:59.348 CC lib/nvmf/transport.o 00:01:59.348 CC lib/nvmf/rdma.o 00:01:59.348 CC lib/nvmf/tcp.o 
00:01:59.348 CC lib/ftl/ftl_debug.o 00:01:59.348 CC lib/ftl/ftl_io.o 00:01:59.348 CC lib/ftl/ftl_sb.o 00:01:59.348 CC lib/ftl/ftl_l2p.o 00:01:59.348 CC lib/ftl/ftl_l2p_flat.o 00:01:59.349 CC lib/ftl/ftl_nv_cache.o 00:01:59.349 CC lib/ftl/ftl_band.o 00:01:59.349 CC lib/ftl/ftl_band_ops.o 00:01:59.349 CC lib/ftl/ftl_writer.o 00:01:59.349 CC lib/ftl/ftl_rq.o 00:01:59.349 CC lib/ftl/ftl_reloc.o 00:01:59.349 CC lib/ftl/ftl_l2p_cache.o 00:01:59.349 CC lib/ftl/ftl_trace.o 00:01:59.349 CC lib/ftl/ftl_p2l.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:59.349 CC lib/ftl/utils/ftl_conf.o 00:01:59.349 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:59.349 CC lib/ftl/utils/ftl_md.o 00:01:59.349 CC lib/ftl/utils/ftl_mempool.o 00:01:59.349 CC lib/ftl/utils/ftl_bitmap.o 00:01:59.349 CC lib/ftl/utils/ftl_property.o 00:01:59.349 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:59.349 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:59.349 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:59.349 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:59.349 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:59.349 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:59.349 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:59.349 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:59.349 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:59.349 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:59.349 CC lib/ftl/base/ftl_base_dev.o 00:01:59.349 CC lib/ftl/base/ftl_base_bdev.o 00:01:59.915 LIB libspdk_nbd.a 00:01:59.915 SO libspdk_nbd.so.7.0 00:01:59.915 LIB libspdk_scsi.a 00:01:59.915 
SYMLINK libspdk_nbd.so 00:01:59.915 SO libspdk_scsi.so.9.0 00:02:00.173 LIB libspdk_ublk.a 00:02:00.173 SYMLINK libspdk_scsi.so 00:02:00.173 SO libspdk_ublk.so.3.0 00:02:00.173 SYMLINK libspdk_ublk.so 00:02:00.173 LIB libspdk_ftl.a 00:02:00.173 CC lib/vhost/vhost_scsi.o 00:02:00.173 CC lib/iscsi/init_grp.o 00:02:00.173 CC lib/vhost/vhost.o 00:02:00.173 CC lib/iscsi/conn.o 00:02:00.173 CC lib/vhost/vhost_rpc.o 00:02:00.173 CC lib/iscsi/iscsi.o 00:02:00.173 CC lib/iscsi/param.o 00:02:00.173 CC lib/vhost/vhost_blk.o 00:02:00.173 CC lib/iscsi/md5.o 00:02:00.173 CC lib/vhost/rte_vhost_user.o 00:02:00.173 CC lib/iscsi/portal_grp.o 00:02:00.173 CC lib/iscsi/iscsi_subsystem.o 00:02:00.173 CC lib/iscsi/tgt_node.o 00:02:00.173 CC lib/iscsi/iscsi_rpc.o 00:02:00.173 CC lib/iscsi/task.o 00:02:00.431 SO libspdk_ftl.so.9.0 00:02:00.689 SYMLINK libspdk_ftl.so 00:02:00.947 LIB libspdk_nvmf.a 00:02:00.947 SO libspdk_nvmf.so.18.0 00:02:00.947 LIB libspdk_vhost.a 00:02:01.204 SO libspdk_vhost.so.8.0 00:02:01.204 SYMLINK libspdk_nvmf.so 00:02:01.204 SYMLINK libspdk_vhost.so 00:02:01.204 LIB libspdk_iscsi.a 00:02:01.204 SO libspdk_iscsi.so.8.0 00:02:01.462 SYMLINK libspdk_iscsi.so 00:02:01.720 CC module/env_dpdk/env_dpdk_rpc.o 00:02:01.720 CC module/blob/bdev/blob_bdev.o 00:02:01.720 CC module/accel/ioat/accel_ioat.o 00:02:01.720 CC module/accel/ioat/accel_ioat_rpc.o 00:02:01.720 CC module/sock/posix/posix.o 00:02:01.720 CC module/scheduler/gscheduler/gscheduler.o 00:02:01.720 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:01.720 CC module/accel/dsa/accel_dsa.o 00:02:01.720 CC module/accel/dsa/accel_dsa_rpc.o 00:02:01.720 CC module/accel/error/accel_error.o 00:02:01.720 CC module/accel/error/accel_error_rpc.o 00:02:01.720 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:01.720 CC module/accel/iaa/accel_iaa.o 00:02:01.720 CC module/accel/iaa/accel_iaa_rpc.o 00:02:01.978 LIB libspdk_env_dpdk_rpc.a 00:02:01.978 SO libspdk_env_dpdk_rpc.so.6.0 00:02:01.978 LIB 
libspdk_scheduler_gscheduler.a 00:02:01.978 LIB libspdk_accel_ioat.a 00:02:01.978 LIB libspdk_scheduler_dpdk_governor.a 00:02:01.978 SYMLINK libspdk_env_dpdk_rpc.so 00:02:01.978 SO libspdk_scheduler_gscheduler.so.4.0 00:02:01.978 LIB libspdk_accel_error.a 00:02:01.978 SO libspdk_accel_ioat.so.6.0 00:02:01.978 LIB libspdk_scheduler_dynamic.a 00:02:01.978 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:01.978 SO libspdk_accel_error.so.2.0 00:02:01.978 LIB libspdk_blob_bdev.a 00:02:01.978 LIB libspdk_accel_iaa.a 00:02:01.978 SO libspdk_scheduler_dynamic.so.4.0 00:02:01.978 LIB libspdk_accel_dsa.a 00:02:01.978 SO libspdk_blob_bdev.so.11.0 00:02:01.978 SYMLINK libspdk_scheduler_gscheduler.so 00:02:01.978 SYMLINK libspdk_accel_ioat.so 00:02:01.978 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:01.978 SO libspdk_accel_iaa.so.3.0 00:02:01.978 SO libspdk_accel_dsa.so.5.0 00:02:01.978 SYMLINK libspdk_accel_error.so 00:02:01.978 SYMLINK libspdk_blob_bdev.so 00:02:01.978 SYMLINK libspdk_scheduler_dynamic.so 00:02:02.236 SYMLINK libspdk_accel_iaa.so 00:02:02.236 SYMLINK libspdk_accel_dsa.so 00:02:02.236 CC module/bdev/error/vbdev_error_rpc.o 00:02:02.236 CC module/bdev/error/vbdev_error.o 00:02:02.236 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:02.236 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:02.236 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:02.236 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:02.236 CC module/bdev/null/bdev_null.o 00:02:02.236 LIB libspdk_sock_posix.a 00:02:02.236 CC module/bdev/null/bdev_null_rpc.o 00:02:02.236 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:02.236 CC module/bdev/delay/vbdev_delay.o 00:02:02.496 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:02.496 CC module/bdev/raid/bdev_raid.o 00:02:02.496 CC module/bdev/raid/bdev_raid_rpc.o 00:02:02.496 CC module/bdev/gpt/gpt.o 00:02:02.496 CC module/bdev/gpt/vbdev_gpt.o 00:02:02.496 CC module/bdev/raid/raid0.o 00:02:02.496 CC module/bdev/raid/bdev_raid_sb.o 00:02:02.496 CC 
module/bdev/raid/raid1.o 00:02:02.496 CC module/bdev/raid/concat.o 00:02:02.496 CC module/bdev/ftl/bdev_ftl.o 00:02:02.496 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:02.496 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:02.496 CC module/bdev/lvol/vbdev_lvol.o 00:02:02.496 CC module/blobfs/bdev/blobfs_bdev.o 00:02:02.496 CC module/bdev/passthru/vbdev_passthru.o 00:02:02.496 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:02.496 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:02.496 CC module/bdev/aio/bdev_aio.o 00:02:02.496 CC module/bdev/malloc/bdev_malloc.o 00:02:02.496 CC module/bdev/iscsi/bdev_iscsi.o 00:02:02.496 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:02.496 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:02.496 CC module/bdev/aio/bdev_aio_rpc.o 00:02:02.496 CC module/bdev/nvme/bdev_nvme.o 00:02:02.496 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:02.496 CC module/bdev/nvme/nvme_rpc.o 00:02:02.496 CC module/bdev/split/vbdev_split.o 00:02:02.496 CC module/bdev/nvme/bdev_mdns_client.o 00:02:02.496 CC module/bdev/split/vbdev_split_rpc.o 00:02:02.496 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:02.496 CC module/bdev/nvme/vbdev_opal.o 00:02:02.496 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:02.496 SO libspdk_sock_posix.so.6.0 00:02:02.496 SYMLINK libspdk_sock_posix.so 00:02:02.496 LIB libspdk_blobfs_bdev.a 00:02:02.496 SO libspdk_blobfs_bdev.so.6.0 00:02:02.496 LIB libspdk_bdev_null.a 00:02:02.496 LIB libspdk_bdev_error.a 00:02:02.755 LIB libspdk_bdev_split.a 00:02:02.755 SO libspdk_bdev_null.so.6.0 00:02:02.755 LIB libspdk_bdev_gpt.a 00:02:02.755 SO libspdk_bdev_error.so.6.0 00:02:02.755 SO libspdk_bdev_split.so.6.0 00:02:02.755 SYMLINK libspdk_blobfs_bdev.so 00:02:02.755 LIB libspdk_bdev_ftl.a 00:02:02.755 LIB libspdk_bdev_zone_block.a 00:02:02.755 SO libspdk_bdev_gpt.so.6.0 00:02:02.755 LIB libspdk_bdev_passthru.a 00:02:02.755 LIB libspdk_bdev_aio.a 00:02:02.755 SYMLINK libspdk_bdev_error.so 00:02:02.755 SYMLINK libspdk_bdev_null.so 00:02:02.755 SO 
libspdk_bdev_ftl.so.6.0 00:02:02.755 SO libspdk_bdev_zone_block.so.6.0 00:02:02.755 LIB libspdk_bdev_malloc.a 00:02:02.755 SYMLINK libspdk_bdev_split.so 00:02:02.755 SO libspdk_bdev_passthru.so.6.0 00:02:02.755 SO libspdk_bdev_aio.so.6.0 00:02:02.755 LIB libspdk_bdev_iscsi.a 00:02:02.755 LIB libspdk_bdev_delay.a 00:02:02.755 SYMLINK libspdk_bdev_gpt.so 00:02:02.755 SO libspdk_bdev_malloc.so.6.0 00:02:02.755 SO libspdk_bdev_delay.so.6.0 00:02:02.755 SYMLINK libspdk_bdev_zone_block.so 00:02:02.755 SYMLINK libspdk_bdev_ftl.so 00:02:02.755 SO libspdk_bdev_iscsi.so.6.0 00:02:02.755 SYMLINK libspdk_bdev_passthru.so 00:02:02.755 SYMLINK libspdk_bdev_aio.so 00:02:02.755 SYMLINK libspdk_bdev_delay.so 00:02:02.755 SYMLINK libspdk_bdev_malloc.so 00:02:02.755 LIB libspdk_bdev_virtio.a 00:02:02.755 SYMLINK libspdk_bdev_iscsi.so 00:02:02.755 LIB libspdk_bdev_lvol.a 00:02:02.755 SO libspdk_bdev_virtio.so.6.0 00:02:03.013 SO libspdk_bdev_lvol.so.6.0 00:02:03.013 SYMLINK libspdk_bdev_virtio.so 00:02:03.013 SYMLINK libspdk_bdev_lvol.so 00:02:03.013 LIB libspdk_bdev_raid.a 00:02:03.013 SO libspdk_bdev_raid.so.6.0 00:02:03.272 SYMLINK libspdk_bdev_raid.so 00:02:03.840 LIB libspdk_bdev_nvme.a 00:02:03.840 SO libspdk_bdev_nvme.so.7.0 00:02:04.099 SYMLINK libspdk_bdev_nvme.so 00:02:04.358 CC module/event/subsystems/sock/sock.o 00:02:04.358 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:04.358 CC module/event/subsystems/iobuf/iobuf.o 00:02:04.358 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:04.358 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:04.358 CC module/event/subsystems/vmd/vmd.o 00:02:04.358 CC module/event/subsystems/scheduler/scheduler.o 00:02:04.617 LIB libspdk_event_sock.a 00:02:04.617 SO libspdk_event_sock.so.5.0 00:02:04.617 LIB libspdk_event_vhost_blk.a 00:02:04.617 LIB libspdk_event_scheduler.a 00:02:04.617 LIB libspdk_event_vmd.a 00:02:04.617 SO libspdk_event_vhost_blk.so.3.0 00:02:04.618 LIB libspdk_event_iobuf.a 00:02:04.618 SO 
libspdk_event_scheduler.so.4.0 00:02:04.618 SO libspdk_event_vmd.so.6.0 00:02:04.618 SYMLINK libspdk_event_sock.so 00:02:04.618 SO libspdk_event_iobuf.so.3.0 00:02:04.618 SYMLINK libspdk_event_vhost_blk.so 00:02:04.618 SYMLINK libspdk_event_scheduler.so 00:02:04.618 SYMLINK libspdk_event_vmd.so 00:02:04.618 SYMLINK libspdk_event_iobuf.so 00:02:04.877 CC module/event/subsystems/accel/accel.o 00:02:05.136 LIB libspdk_event_accel.a 00:02:05.136 SO libspdk_event_accel.so.6.0 00:02:05.136 SYMLINK libspdk_event_accel.so 00:02:05.395 CC module/event/subsystems/bdev/bdev.o 00:02:05.395 LIB libspdk_event_bdev.a 00:02:05.395 SO libspdk_event_bdev.so.6.0 00:02:05.654 SYMLINK libspdk_event_bdev.so 00:02:05.654 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:05.654 CC module/event/subsystems/nbd/nbd.o 00:02:05.654 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:05.654 CC module/event/subsystems/ublk/ublk.o 00:02:05.654 CC module/event/subsystems/scsi/scsi.o 00:02:05.913 LIB libspdk_event_nbd.a 00:02:05.913 SO libspdk_event_nbd.so.6.0 00:02:05.913 LIB libspdk_event_ublk.a 00:02:05.913 LIB libspdk_event_scsi.a 00:02:05.913 LIB libspdk_event_nvmf.a 00:02:05.913 SO libspdk_event_ublk.so.3.0 00:02:05.913 SO libspdk_event_scsi.so.6.0 00:02:05.913 SYMLINK libspdk_event_nbd.so 00:02:05.913 SO libspdk_event_nvmf.so.6.0 00:02:05.913 SYMLINK libspdk_event_ublk.so 00:02:05.913 SYMLINK libspdk_event_scsi.so 00:02:05.913 SYMLINK libspdk_event_nvmf.so 00:02:06.171 CC module/event/subsystems/iscsi/iscsi.o 00:02:06.171 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:06.430 LIB libspdk_event_iscsi.a 00:02:06.430 LIB libspdk_event_vhost_scsi.a 00:02:06.430 SO libspdk_event_iscsi.so.6.0 00:02:06.430 SO libspdk_event_vhost_scsi.so.3.0 00:02:06.430 SYMLINK libspdk_event_iscsi.so 00:02:06.430 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.430 SO libspdk.so.6.0 00:02:06.430 SYMLINK libspdk.so 00:02:06.688 TEST_HEADER include/spdk/accel.h 00:02:06.688 TEST_HEADER include/spdk/barrier.h 
00:02:06.688 TEST_HEADER include/spdk/accel_module.h 00:02:06.689 TEST_HEADER include/spdk/base64.h 00:02:06.689 TEST_HEADER include/spdk/assert.h 00:02:06.689 TEST_HEADER include/spdk/bdev.h 00:02:06.689 TEST_HEADER include/spdk/bdev_module.h 00:02:06.689 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.689 TEST_HEADER include/spdk/bit_array.h 00:02:06.689 CC app/spdk_nvme_perf/perf.o 00:02:06.689 TEST_HEADER include/spdk/bit_pool.h 00:02:06.689 CC app/spdk_nvme_identify/identify.o 00:02:06.689 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.689 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.689 TEST_HEADER include/spdk/blobfs.h 00:02:06.689 CC app/spdk_lspci/spdk_lspci.o 00:02:06.689 CC test/rpc_client/rpc_client_test.o 00:02:06.689 TEST_HEADER include/spdk/conf.h 00:02:06.689 TEST_HEADER include/spdk/blob.h 00:02:06.689 TEST_HEADER include/spdk/config.h 00:02:06.689 CC app/trace_record/trace_record.o 00:02:06.689 TEST_HEADER include/spdk/cpuset.h 00:02:06.689 TEST_HEADER include/spdk/crc16.h 00:02:06.689 TEST_HEADER include/spdk/crc32.h 00:02:06.689 TEST_HEADER include/spdk/crc64.h 00:02:06.689 CXX app/trace/trace.o 00:02:06.689 TEST_HEADER include/spdk/dif.h 00:02:06.689 TEST_HEADER include/spdk/endian.h 00:02:06.689 TEST_HEADER include/spdk/dma.h 00:02:06.689 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.689 TEST_HEADER include/spdk/env.h 00:02:06.689 TEST_HEADER include/spdk/event.h 00:02:06.689 CC app/spdk_nvme_discover/discovery_aer.o 00:02:06.689 TEST_HEADER include/spdk/fd_group.h 00:02:06.689 TEST_HEADER include/spdk/fd.h 00:02:06.689 CC app/spdk_top/spdk_top.o 00:02:06.689 TEST_HEADER include/spdk/file.h 00:02:06.689 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.689 TEST_HEADER include/spdk/ftl.h 00:02:06.689 TEST_HEADER include/spdk/hexlify.h 00:02:06.689 TEST_HEADER include/spdk/histogram_data.h 00:02:06.689 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.689 TEST_HEADER include/spdk/idxd.h 00:02:06.689 TEST_HEADER include/spdk/init.h 00:02:06.689 TEST_HEADER 
include/spdk/iscsi_spec.h 00:02:06.689 TEST_HEADER include/spdk/ioat.h 00:02:06.689 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.689 TEST_HEADER include/spdk/json.h 00:02:06.689 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.689 TEST_HEADER include/spdk/likely.h 00:02:06.689 TEST_HEADER include/spdk/log.h 00:02:06.953 TEST_HEADER include/spdk/memory.h 00:02:06.953 TEST_HEADER include/spdk/lvol.h 00:02:06.953 TEST_HEADER include/spdk/mmio.h 00:02:06.953 TEST_HEADER include/spdk/notify.h 00:02:06.953 TEST_HEADER include/spdk/nbd.h 00:02:06.953 TEST_HEADER include/spdk/nvme.h 00:02:06.953 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:06.953 TEST_HEADER include/spdk/nvme_intel.h 00:02:06.953 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:06.953 TEST_HEADER include/spdk/nvme_spec.h 00:02:06.953 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:06.953 TEST_HEADER include/spdk/nvme_zns.h 00:02:06.953 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:06.953 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:06.953 TEST_HEADER include/spdk/nvmf.h 00:02:06.953 TEST_HEADER include/spdk/nvmf_spec.h 00:02:06.953 CC app/spdk_dd/spdk_dd.o 00:02:06.953 TEST_HEADER include/spdk/nvmf_transport.h 00:02:06.953 TEST_HEADER include/spdk/opal_spec.h 00:02:06.953 TEST_HEADER include/spdk/opal.h 00:02:06.953 TEST_HEADER include/spdk/pci_ids.h 00:02:06.953 TEST_HEADER include/spdk/pipe.h 00:02:06.953 CC app/iscsi_tgt/iscsi_tgt.o 00:02:06.953 TEST_HEADER include/spdk/queue.h 00:02:06.953 TEST_HEADER include/spdk/reduce.h 00:02:06.953 TEST_HEADER include/spdk/rpc.h 00:02:06.953 TEST_HEADER include/spdk/scheduler.h 00:02:06.953 TEST_HEADER include/spdk/scsi_spec.h 00:02:06.953 TEST_HEADER include/spdk/scsi.h 00:02:06.953 CC app/nvmf_tgt/nvmf_main.o 00:02:06.953 TEST_HEADER include/spdk/stdinc.h 00:02:06.953 TEST_HEADER include/spdk/sock.h 00:02:06.953 TEST_HEADER include/spdk/thread.h 00:02:06.953 TEST_HEADER include/spdk/string.h 00:02:06.953 TEST_HEADER include/spdk/trace_parser.h 00:02:06.953 TEST_HEADER 
include/spdk/trace.h 00:02:06.953 TEST_HEADER include/spdk/tree.h 00:02:06.953 TEST_HEADER include/spdk/ublk.h 00:02:06.953 TEST_HEADER include/spdk/util.h 00:02:06.953 TEST_HEADER include/spdk/uuid.h 00:02:06.953 TEST_HEADER include/spdk/version.h 00:02:06.953 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:06.953 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:06.953 TEST_HEADER include/spdk/vhost.h 00:02:06.953 TEST_HEADER include/spdk/vmd.h 00:02:06.953 TEST_HEADER include/spdk/zipf.h 00:02:06.953 TEST_HEADER include/spdk/xor.h 00:02:06.953 CXX test/cpp_headers/accel_module.o 00:02:06.953 CXX test/cpp_headers/accel.o 00:02:06.953 CXX test/cpp_headers/barrier.o 00:02:06.953 CXX test/cpp_headers/assert.o 00:02:06.953 CXX test/cpp_headers/base64.o 00:02:06.953 CXX test/cpp_headers/bdev.o 00:02:06.953 CXX test/cpp_headers/bdev_module.o 00:02:06.953 CXX test/cpp_headers/bit_array.o 00:02:06.953 CXX test/cpp_headers/bdev_zone.o 00:02:06.953 CXX test/cpp_headers/bit_pool.o 00:02:06.953 CXX test/cpp_headers/blob_bdev.o 00:02:06.953 CXX test/cpp_headers/blobfs_bdev.o 00:02:06.953 CXX test/cpp_headers/blob.o 00:02:06.953 CC app/vhost/vhost.o 00:02:06.953 CXX test/cpp_headers/blobfs.o 00:02:06.953 CXX test/cpp_headers/config.o 00:02:06.953 CXX test/cpp_headers/conf.o 00:02:06.953 CXX test/cpp_headers/cpuset.o 00:02:06.953 CXX test/cpp_headers/crc32.o 00:02:06.953 CXX test/cpp_headers/crc16.o 00:02:06.953 CC test/app/jsoncat/jsoncat.o 00:02:06.953 CC app/spdk_tgt/spdk_tgt.o 00:02:06.953 CC test/thread/poller_perf/poller_perf.o 00:02:06.953 CXX test/cpp_headers/dif.o 00:02:06.953 CXX test/cpp_headers/crc64.o 00:02:06.953 CC test/app/histogram_perf/histogram_perf.o 00:02:06.953 CC test/event/event_perf/event_perf.o 00:02:06.953 CC test/event/reactor_perf/reactor_perf.o 00:02:06.953 CC test/app/stub/stub.o 00:02:06.953 CC examples/nvme/reconnect/reconnect.o 00:02:06.953 CC test/nvme/simple_copy/simple_copy.o 00:02:06.953 CC test/nvme/reset/reset.o 00:02:06.953 CC 
examples/nvme/abort/abort.o 00:02:06.953 CC examples/nvme/hotplug/hotplug.o 00:02:06.953 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:06.953 CC test/env/memory/memory_ut.o 00:02:06.953 CC examples/vmd/lsvmd/lsvmd.o 00:02:06.953 CC test/event/reactor/reactor.o 00:02:06.953 CC examples/accel/perf/accel_perf.o 00:02:06.954 CC examples/nvme/hello_world/hello_world.o 00:02:06.954 CC examples/vmd/led/led.o 00:02:06.954 CC test/env/vtophys/vtophys.o 00:02:06.954 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:06.954 CC examples/sock/hello_world/hello_sock.o 00:02:06.954 CC test/nvme/e2edp/nvme_dp.o 00:02:06.954 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:06.954 CC test/nvme/reserve/reserve.o 00:02:06.954 CC test/event/app_repeat/app_repeat.o 00:02:06.954 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:06.954 CC test/nvme/aer/aer.o 00:02:06.954 CC test/app/bdev_svc/bdev_svc.o 00:02:06.954 CC examples/ioat/verify/verify.o 00:02:06.954 CC test/nvme/sgl/sgl.o 00:02:06.954 CC test/nvme/connect_stress/connect_stress.o 00:02:06.954 CC test/nvme/boot_partition/boot_partition.o 00:02:06.954 CC examples/nvme/arbitration/arbitration.o 00:02:06.954 CC test/bdev/bdevio/bdevio.o 00:02:06.954 CXX test/cpp_headers/dma.o 00:02:06.954 CC examples/ioat/perf/perf.o 00:02:06.954 CC test/nvme/fdp/fdp.o 00:02:06.954 CC test/nvme/startup/startup.o 00:02:06.954 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.954 CC test/nvme/compliance/nvme_compliance.o 00:02:06.954 CC examples/util/zipf/zipf.o 00:02:06.954 CC app/fio/nvme/fio_plugin.o 00:02:06.954 CC test/nvme/cuse/cuse.o 00:02:06.954 CC test/nvme/err_injection/err_injection.o 00:02:06.954 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.954 CC examples/idxd/perf/perf.o 00:02:06.954 CC test/nvme/overhead/overhead.o 00:02:06.954 CC app/fio/bdev/fio_plugin.o 00:02:06.954 CC test/env/pci/pci_ut.o 00:02:06.954 CC test/dma/test_dma/test_dma.o 00:02:06.954 CC test/accel/dif/dif.o 00:02:06.954 CC 
test/event/scheduler/scheduler.o 00:02:06.954 CC examples/thread/thread/thread_ex.o 00:02:06.954 CC test/blobfs/mkfs/mkfs.o 00:02:06.954 CC examples/blob/cli/blobcli.o 00:02:06.954 CC examples/bdev/hello_world/hello_bdev.o 00:02:06.954 CC examples/bdev/bdevperf/bdevperf.o 00:02:06.954 CC examples/blob/hello_world/hello_blob.o 00:02:06.954 CC examples/nvmf/nvmf/nvmf.o 00:02:06.954 CC test/lvol/esnap/esnap.o 00:02:07.227 LINK spdk_lspci 00:02:07.227 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.227 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.227 LINK rpc_client_test 00:02:07.227 LINK spdk_nvme_discover 00:02:07.227 LINK interrupt_tgt 00:02:07.227 LINK histogram_perf 00:02:07.227 LINK lsvmd 00:02:07.227 LINK reactor 00:02:07.227 LINK vhost 00:02:07.227 LINK led 00:02:07.227 LINK vtophys 00:02:07.227 LINK jsoncat 00:02:07.227 LINK iscsi_tgt 00:02:07.227 LINK reactor_perf 00:02:07.227 LINK app_repeat 00:02:07.227 LINK spdk_tgt 00:02:07.227 LINK nvmf_tgt 00:02:07.227 LINK poller_perf 00:02:07.227 LINK bdev_svc 00:02:07.495 LINK boot_partition 00:02:07.495 LINK event_perf 00:02:07.495 LINK cmb_copy 00:02:07.495 LINK connect_stress 00:02:07.495 CXX test/cpp_headers/endian.o 00:02:07.495 LINK spdk_trace_record 00:02:07.495 LINK stub 00:02:07.495 CXX test/cpp_headers/env_dpdk.o 00:02:07.495 CXX test/cpp_headers/env.o 00:02:07.495 LINK pmr_persistence 00:02:07.495 LINK env_dpdk_post_init 00:02:07.495 LINK zipf 00:02:07.495 LINK simple_copy 00:02:07.495 LINK hotplug 00:02:07.495 LINK err_injection 00:02:07.495 LINK doorbell_aers 00:02:07.495 LINK reserve 00:02:07.495 CXX test/cpp_headers/event.o 00:02:07.495 LINK startup 00:02:07.495 LINK fused_ordering 00:02:07.495 CXX test/cpp_headers/fd_group.o 00:02:07.495 LINK mkfs 00:02:07.495 LINK scheduler 00:02:07.495 CXX test/cpp_headers/fd.o 00:02:07.495 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.495 CXX test/cpp_headers/file.o 00:02:07.495 CXX test/cpp_headers/ftl.o 00:02:07.495 LINK verify 00:02:07.495 CXX 
test/cpp_headers/gpt_spec.o 00:02:07.495 CXX test/cpp_headers/hexlify.o 00:02:07.495 CXX test/cpp_headers/histogram_data.o 00:02:07.495 LINK nvme_dp 00:02:07.495 LINK hello_world 00:02:07.495 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.495 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:07.495 CXX test/cpp_headers/idxd.o 00:02:07.495 CXX test/cpp_headers/idxd_spec.o 00:02:07.495 CXX test/cpp_headers/init.o 00:02:07.495 LINK ioat_perf 00:02:07.495 LINK hello_blob 00:02:07.495 CXX test/cpp_headers/ioat.o 00:02:07.495 CXX test/cpp_headers/ioat_spec.o 00:02:07.495 LINK hello_sock 00:02:07.495 LINK sgl 00:02:07.495 CXX test/cpp_headers/iscsi_spec.o 00:02:07.495 LINK aer 00:02:07.495 LINK hello_bdev 00:02:07.495 CXX test/cpp_headers/json.o 00:02:07.495 LINK thread 00:02:07.495 LINK reset 00:02:07.770 LINK overhead 00:02:07.770 CXX test/cpp_headers/jsonrpc.o 00:02:07.770 LINK nvme_compliance 00:02:07.770 CXX test/cpp_headers/likely.o 00:02:07.770 CXX test/cpp_headers/log.o 00:02:07.770 LINK spdk_dd 00:02:07.770 CXX test/cpp_headers/lvol.o 00:02:07.770 LINK fdp 00:02:07.770 CXX test/cpp_headers/memory.o 00:02:07.770 LINK arbitration 00:02:07.770 CXX test/cpp_headers/mmio.o 00:02:07.770 CXX test/cpp_headers/nbd.o 00:02:07.770 LINK reconnect 00:02:07.770 CXX test/cpp_headers/notify.o 00:02:07.770 LINK nvmf 00:02:07.770 CXX test/cpp_headers/nvme.o 00:02:07.770 LINK test_dma 00:02:07.770 CXX test/cpp_headers/nvme_intel.o 00:02:07.770 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.770 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.770 CXX test/cpp_headers/nvme_spec.o 00:02:07.770 CXX test/cpp_headers/nvme_zns.o 00:02:07.770 LINK dif 00:02:07.770 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.770 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.770 LINK abort 00:02:07.770 CXX test/cpp_headers/nvmf.o 00:02:07.770 CXX test/cpp_headers/nvmf_spec.o 00:02:07.770 LINK idxd_perf 00:02:07.770 CXX test/cpp_headers/nvmf_transport.o 00:02:07.770 LINK bdevio 00:02:07.770 CXX 
test/cpp_headers/opal.o 00:02:07.770 CXX test/cpp_headers/opal_spec.o 00:02:07.770 CXX test/cpp_headers/pci_ids.o 00:02:07.770 CXX test/cpp_headers/pipe.o 00:02:07.770 CXX test/cpp_headers/queue.o 00:02:07.770 CXX test/cpp_headers/reduce.o 00:02:07.770 CXX test/cpp_headers/rpc.o 00:02:07.770 CXX test/cpp_headers/scheduler.o 00:02:07.770 LINK spdk_trace 00:02:07.770 CXX test/cpp_headers/scsi.o 00:02:07.770 CXX test/cpp_headers/scsi_spec.o 00:02:07.770 CXX test/cpp_headers/sock.o 00:02:07.770 CXX test/cpp_headers/stdinc.o 00:02:07.770 CXX test/cpp_headers/string.o 00:02:08.033 CXX test/cpp_headers/thread.o 00:02:08.033 CXX test/cpp_headers/trace.o 00:02:08.033 LINK nvme_fuzz 00:02:08.033 CXX test/cpp_headers/trace_parser.o 00:02:08.033 CXX test/cpp_headers/ublk.o 00:02:08.033 CXX test/cpp_headers/tree.o 00:02:08.033 CXX test/cpp_headers/util.o 00:02:08.033 CXX test/cpp_headers/uuid.o 00:02:08.033 LINK pci_ut 00:02:08.033 LINK spdk_nvme 00:02:08.033 CXX test/cpp_headers/version.o 00:02:08.033 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.033 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.033 CXX test/cpp_headers/vhost.o 00:02:08.033 LINK blobcli 00:02:08.033 CXX test/cpp_headers/vmd.o 00:02:08.033 CXX test/cpp_headers/xor.o 00:02:08.033 LINK nvme_manage 00:02:08.033 CXX test/cpp_headers/zipf.o 00:02:08.033 LINK accel_perf 00:02:08.033 LINK spdk_bdev 00:02:08.033 LINK spdk_nvme_perf 00:02:08.292 LINK vhost_fuzz 00:02:08.292 LINK mem_callbacks 00:02:08.292 LINK bdevperf 00:02:08.551 LINK spdk_nvme_identify 00:02:08.551 LINK spdk_top 00:02:08.551 LINK memory_ut 00:02:08.551 LINK cuse 00:02:09.117 LINK iscsi_fuzz 00:02:11.017 LINK esnap 00:02:11.312 00:02:11.312 real 0m40.244s 00:02:11.312 user 6m48.500s 00:02:11.312 sys 2m57.658s 00:02:11.312 08:02:44 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:11.312 08:02:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.312 ************************************ 00:02:11.312 END TEST make 00:02:11.312 
************************************ 00:02:11.312 08:02:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:11.312 08:02:44 -- nvmf/common.sh@7 -- # uname -s 00:02:11.312 08:02:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:11.312 08:02:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:11.312 08:02:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:11.312 08:02:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:11.312 08:02:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:11.312 08:02:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:11.312 08:02:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:11.312 08:02:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:11.312 08:02:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:11.312 08:02:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:11.312 08:02:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:11.312 08:02:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:11.312 08:02:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:11.312 08:02:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:11.312 08:02:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:11.312 08:02:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.312 08:02:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:11.312 08:02:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.312 08:02:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.312 08:02:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.312 08:02:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.312 08:02:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.312 08:02:44 -- paths/export.sh@5 -- # export PATH 00:02:11.312 08:02:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.312 08:02:44 -- nvmf/common.sh@46 -- # : 0 00:02:11.312 08:02:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:11.312 08:02:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:11.312 08:02:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:11.312 08:02:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:11.312 08:02:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:11.312 08:02:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:11.312 08:02:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:11.312 08:02:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:11.312 08:02:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:11.312 08:02:44 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:11.312 08:02:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:11.312 08:02:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:11.312 08:02:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.312 08:02:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:11.312 08:02:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.312 08:02:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:11.312 08:02:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:11.312 08:02:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:11.312 08:02:44 -- spdk/autotest.sh@48 -- # udevadm_pid=2018813 00:02:11.312 08:02:44 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.312 08:02:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:11.312 08:02:44 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.312 08:02:44 -- spdk/autotest.sh@54 -- # echo 2018815 00:02:11.312 08:02:44 -- spdk/autotest.sh@56 -- # echo 2018816 00:02:11.312 08:02:44 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:11.312 08:02:44 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:11.312 08:02:44 -- spdk/autotest.sh@60 -- # echo 2018817 00:02:11.312 08:02:44 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:11.312 08:02:44 -- spdk/autotest.sh@62 -- # echo 2018818 00:02:11.312 08:02:44 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:11.312 08:02:44 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:11.312 08:02:44 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:11.312 08:02:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:11.312 08:02:44 -- common/autotest_common.sh@10 -- # set +x 00:02:11.312 08:02:44 -- spdk/autotest.sh@70 -- # create_test_list 00:02:11.312 08:02:44 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:11.312 08:02:44 -- common/autotest_common.sh@10 -- # set +x 00:02:11.312 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:11.312 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:11.312 08:02:44 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:11.312 08:02:44 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.312 08:02:44 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.312 08:02:44 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.312 08:02:44 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.312 08:02:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:11.312 08:02:44 -- common/autotest_common.sh@1438 
-- # uname 00:02:11.312 08:02:44 -- common/autotest_common.sh@1438 -- # '[' Linux = FreeBSD ']' 00:02:11.312 08:02:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:11.312 08:02:44 -- common/autotest_common.sh@1458 -- # uname 00:02:11.312 08:02:44 -- common/autotest_common.sh@1458 -- # [[ Linux = FreeBSD ]] 00:02:11.312 08:02:44 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:11.312 08:02:44 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:11.312 08:02:44 -- spdk/autotest.sh@83 -- # hash lcov 00:02:11.312 08:02:44 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:11.312 08:02:44 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:11.312 --rc lcov_branch_coverage=1 00:02:11.312 --rc lcov_function_coverage=1 00:02:11.312 --rc genhtml_branch_coverage=1 00:02:11.312 --rc genhtml_function_coverage=1 00:02:11.312 --rc genhtml_legend=1 00:02:11.312 --rc geninfo_all_blocks=1 00:02:11.312 ' 00:02:11.312 08:02:44 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:11.312 --rc lcov_branch_coverage=1 00:02:11.312 --rc lcov_function_coverage=1 00:02:11.312 --rc genhtml_branch_coverage=1 00:02:11.312 --rc genhtml_function_coverage=1 00:02:11.312 --rc genhtml_legend=1 00:02:11.312 --rc geninfo_all_blocks=1 00:02:11.312 ' 00:02:11.312 08:02:44 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:11.312 --rc lcov_branch_coverage=1 00:02:11.312 --rc lcov_function_coverage=1 00:02:11.312 --rc genhtml_branch_coverage=1 00:02:11.312 --rc genhtml_function_coverage=1 00:02:11.312 --rc genhtml_legend=1 00:02:11.312 --rc geninfo_all_blocks=1 00:02:11.312 --no-external' 00:02:11.312 08:02:44 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:11.313 --rc lcov_branch_coverage=1 00:02:11.313 --rc lcov_function_coverage=1 00:02:11.313 --rc genhtml_branch_coverage=1 00:02:11.313 --rc genhtml_function_coverage=1 00:02:11.313 --rc genhtml_legend=1 00:02:11.313 --rc geninfo_all_blocks=1 00:02:11.313 --no-external' 00:02:11.313 08:02:44 -- spdk/autotest.sh@94 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:11.604 lcov: LCOV version 1.14 00:02:11.604 08:02:45 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:12.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:12.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:12.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:12.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:12.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:12.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:27.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:27.047 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:27.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:27.047 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:27.047 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:27.047 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:27.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:27.047 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:27.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:27.047 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:27.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:27.047 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:27.048 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no 
functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:27.048 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:27.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:27.048 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:27.048 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:27.049 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no 
functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:27.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:27.049 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:35.160 08:03:07 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:35.160 08:03:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:35.160 08:03:07 -- common/autotest_common.sh@10 -- # set +x 00:02:35.160 08:03:07 -- spdk/autotest.sh@102 -- # rm -f 00:02:35.160 08:03:07 
-- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.060 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:02:37.060 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:37.060 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:37.060 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:37.318 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:37.318 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:37.318 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:37.318 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:37.318 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:37.318 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:37.318 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:37.318 08:03:10 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:37.318 08:03:10 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:02:37.318 08:03:10 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:02:37.318 08:03:10 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:02:37.318 08:03:10 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:37.318 08:03:10 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:02:37.318 08:03:10 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:02:37.318 08:03:10 -- 
common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:37.318 08:03:10 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:37.318 08:03:10 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:37.318 08:03:10 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:02:37.318 08:03:10 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:02:37.318 08:03:10 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:37.318 08:03:10 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:37.318 08:03:10 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:37.318 08:03:10 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:02:37.318 08:03:10 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:02:37.318 08:03:10 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:37.318 08:03:10 -- common/autotest_common.sh@1648 -- # [[ host-managed != none ]] 00:02:37.318 08:03:10 -- common/autotest_common.sh@1657 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:37.318 08:03:10 -- spdk/autotest.sh@109 -- # (( 1 > 0 )) 00:02:37.318 08:03:10 -- spdk/autotest.sh@114 -- # export PCI_BLOCKED=0000:5f:00.0 00:02:37.318 08:03:10 -- spdk/autotest.sh@114 -- # PCI_BLOCKED=0000:5f:00.0 00:02:37.318 08:03:10 -- spdk/autotest.sh@115 -- # export PCI_ZONED=0000:5f:00.0 00:02:37.318 08:03:10 -- spdk/autotest.sh@115 -- # PCI_ZONED=0000:5f:00.0 00:02:37.318 08:03:10 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 00:02:37.318 08:03:10 -- spdk/autotest.sh@121 -- # grep -v p 00:02:37.318 08:03:10 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:37.318 08:03:10 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:37.318 08:03:10 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:37.318 08:03:10 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 
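The `get_zoned_devs` loop traced above decides whether each NVMe namespace is zoned by reading `/sys/block/<dev>/queue/zoned` and comparing it against `none` (here, `nvme1n2` reports `host-managed` and ends up in `PCI_BLOCKED`). A minimal sketch of that logic, with a hypothetical `SYS_BLOCK` variable added so it can be pointed at a fake sysfs tree; the real harness reads `/sys/block` directly:

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device scan in the trace above. SYS_BLOCK is an
# illustrative knob (not in the original script) so the logic can be
# exercised against any directory tree.
SYS_BLOCK=${SYS_BLOCK:-/sys/block}

is_block_zoned() {
    # A device is zoned when queue/zoned exists and holds anything but "none".
    local device=$1 zoned
    [[ -e $SYS_BLOCK/$device/queue/zoned ]] || return 1
    zoned=$(<"$SYS_BLOCK/$device/queue/zoned")
    [[ $zoned != none ]]
}

get_zoned_devs() {
    zoned_devs=()
    local nvme
    for nvme in "$SYS_BLOCK"/nvme*; do
        [[ -e $nvme ]] || continue          # skip unexpanded glob
        if is_block_zoned "${nvme##*/}"; then
            zoned_devs+=("${nvme##*/}")     # keep only the device name
        fi
    done
}
```

The trace then maps each zoned device name to its PCI BDF and exports the result as `PCI_BLOCKED`/`PCI_ZONED` so later stages skip it.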
00:02:37.318 08:03:10 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:37.318 No valid GPT data, bailing 00:02:37.575 08:03:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:37.575 08:03:11 -- scripts/common.sh@393 -- # pt= 00:02:37.575 08:03:11 -- scripts/common.sh@394 -- # return 1 00:02:37.576 08:03:11 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:37.576 1+0 records in 00:02:37.576 1+0 records out 00:02:37.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00209875 s, 500 MB/s 00:02:37.576 08:03:11 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:37.576 08:03:11 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:37.576 08:03:11 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:02:37.576 08:03:11 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:37.576 08:03:11 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:37.576 No valid GPT data, bailing 00:02:37.576 08:03:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:37.576 08:03:11 -- scripts/common.sh@393 -- # pt= 00:02:37.576 08:03:11 -- scripts/common.sh@394 -- # return 1 00:02:37.576 08:03:11 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:37.576 1+0 records in 00:02:37.576 1+0 records out 00:02:37.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507106 s, 207 MB/s 00:02:37.576 08:03:11 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:37.576 08:03:11 -- spdk/autotest.sh@123 -- # [[ -z 0000:5f:00.0 ]] 00:02:37.576 08:03:11 -- spdk/autotest.sh@123 -- # continue 00:02:37.576 08:03:11 -- spdk/autotest.sh@129 -- # sync 00:02:37.576 08:03:11 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:37.576 08:03:11 -- common/autotest_common.sh@22 -- # eval 
'reap_spdk_processes 12> /dev/null' 00:02:37.576 08:03:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:41.754 08:03:14 -- spdk/autotest.sh@135 -- # uname -s 00:02:41.754 08:03:14 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:41.754 08:03:14 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:41.754 08:03:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:41.754 08:03:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:41.754 08:03:14 -- common/autotest_common.sh@10 -- # set +x 00:02:41.754 ************************************ 00:02:41.754 START TEST setup.sh 00:02:41.754 ************************************ 00:02:41.754 08:03:14 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:41.754 * Looking for test storage... 00:02:41.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.754 08:03:14 -- setup/test-setup.sh@10 -- # uname -s 00:02:41.754 08:03:14 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:41.754 08:03:14 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:41.754 08:03:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:41.754 08:03:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:41.754 08:03:14 -- common/autotest_common.sh@10 -- # set +x 00:02:41.754 ************************************ 00:02:41.754 START TEST acl 00:02:41.754 ************************************ 00:02:41.754 08:03:14 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:41.754 * Looking for test storage... 
00:02:41.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.754 08:03:14 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:41.754 08:03:14 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:02:41.754 08:03:14 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:02:41.754 08:03:14 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:02:41.754 08:03:14 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:41.754 08:03:14 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:02:41.754 08:03:14 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:02:41.754 08:03:14 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:41.754 08:03:14 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:41.754 08:03:14 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:41.754 08:03:14 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:02:41.754 08:03:14 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:02:41.755 08:03:14 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:41.755 08:03:14 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:02:41.755 08:03:14 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:02:41.755 08:03:14 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:02:41.755 08:03:14 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:02:41.755 08:03:14 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:41.755 08:03:14 -- common/autotest_common.sh@1648 -- # [[ host-managed != none ]] 00:02:41.755 08:03:14 -- common/autotest_common.sh@1657 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:41.755 08:03:14 -- setup/acl.sh@12 -- # devs=() 00:02:41.755 08:03:14 -- setup/acl.sh@12 -- # declare -a devs 00:02:41.755 08:03:14 -- setup/acl.sh@13 -- # drivers=() 00:02:41.755 
08:03:14 -- setup/acl.sh@13 -- # declare -A drivers 00:02:41.755 08:03:14 -- setup/acl.sh@51 -- # setup reset 00:02:41.755 08:03:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:41.755 08:03:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.287 08:03:17 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:44.287 08:03:17 -- setup/acl.sh@16 -- # local dev driver 00:02:44.287 08:03:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:44.287 08:03:17 -- setup/acl.sh@15 -- # setup output status 00:02:44.287 08:03:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.287 08:03:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:46.861 Hugepages 00:02:46.861 node hugesize free / total 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 00:02:46.861 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
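The repeated `read -r _ dev _ _ _ driver _` lines above are `collect_setup_devs` consuming the columnar output of `setup.sh status`: each `_` is a throwaway placeholder for a column (Type, Vendor, Device, NUMA), leaving only the BDF and driver. A condensed sketch of that pattern, using an invented `collect_nvme_devs` name and sample status lines that are illustrative, not taken from this run:

```shell
#!/usr/bin/env bash
# Sketch of the column parsing used by collect_setup_devs in the trace:
# read -r splits each line on whitespace, and "_" placeholders discard
# the columns we do not need.
collect_nvme_devs() {
    devs=()
    declare -gA drivers=()
    local dev driver
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue   # keep only PCI BDF rows, as in the trace
        [[ $driver == nvme ]] || continue   # the trace skips ioatdma the same way
        devs+=("$dev")
        drivers["$dev"]=$driver
    done
}
```

Feeding it lines via redirection (not a pipe) keeps the arrays in the current shell rather than a subshell.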
00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.861 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:46.861 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.861 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 
00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:47.120 08:03:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@21 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- 
setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:47.120 08:03:20 -- setup/acl.sh@20 -- # continue 00:02:47.120 08:03:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.120 08:03:20 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:47.120 08:03:20 -- setup/acl.sh@54 -- # run_test denied denied 00:02:47.120 08:03:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:47.120 08:03:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:47.120 08:03:20 -- common/autotest_common.sh@10 -- # set +x 00:02:47.120 ************************************ 00:02:47.120 START TEST denied 00:02:47.120 ************************************ 00:02:47.120 08:03:20 -- common/autotest_common.sh@1102 -- # denied 00:02:47.120 08:03:20 -- setup/acl.sh@38 -- # PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0' 00:02:47.120 08:03:20 -- setup/acl.sh@38 -- # setup output config 00:02:47.120 08:03:20 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:47.120 08:03:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.120 08:03:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:51.315 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 
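The escaped glob seen earlier in the trace, `[[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]]`, is just xtrace's rendering of a substring match of a device's BDF against the `PCI_BLOCKED` list, which is how the `denied` test gets controllers skipped. A quoted expansion expresses the same check; `pci_blocked` is an illustrative helper name, not a function from the original scripts:

```shell
#!/usr/bin/env bash
# Sketch of the PCI_BLOCKED membership check from the trace: quoting the
# expansion inside the glob makes it a literal substring match.
pci_blocked() {
    local bdf=$1
    [[ $PCI_BLOCKED == *"$bdf"* ]]
}
```

With `PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0'`, `pci_blocked 0000:5e:00.0` succeeds and the controller is skipped, matching the "Skipping denied controller" line above.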
00:02:51.315 08:03:24 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:51.315 08:03:24 -- setup/acl.sh@28 -- # local dev driver 00:02:51.315 08:03:24 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:51.315 08:03:24 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:51.315 08:03:24 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:51.315 08:03:24 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:51.315 08:03:24 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:51.315 08:03:24 -- setup/acl.sh@41 -- # setup reset 00:02:51.315 08:03:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.315 08:03:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.507 00:02:55.507 real 0m7.879s 00:02:55.507 user 0m2.644s 00:02:55.507 sys 0m4.491s 00:02:55.507 08:03:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:55.507 08:03:28 -- common/autotest_common.sh@10 -- # set +x 00:02:55.507 ************************************ 00:02:55.507 END TEST denied 00:02:55.507 ************************************ 00:02:55.507 08:03:28 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:55.507 08:03:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:02:55.507 08:03:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:02:55.507 08:03:28 -- common/autotest_common.sh@10 -- # set +x 00:02:55.507 ************************************ 00:02:55.507 START TEST allowed 00:02:55.507 ************************************ 00:02:55.507 08:03:28 -- common/autotest_common.sh@1102 -- # allowed 00:02:55.507 08:03:28 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:55.507 08:03:28 -- setup/acl.sh@45 -- # setup output config 00:02:55.507 08:03:28 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:55.507 08:03:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.507 08:03:28 -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.707 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:59.707 08:03:32 -- setup/acl.sh@47 -- # verify 00:02:59.707 08:03:32 -- setup/acl.sh@28 -- # local dev driver 00:02:59.707 08:03:32 -- setup/acl.sh@48 -- # setup reset 00:02:59.707 08:03:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.707 08:03:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.995 00:03:02.995 real 0m7.857s 00:03:02.995 user 0m2.638s 00:03:02.995 sys 0m4.392s 00:03:02.995 08:03:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:02.995 08:03:36 -- common/autotest_common.sh@10 -- # set +x 00:03:02.995 ************************************ 00:03:02.995 END TEST allowed 00:03:02.995 ************************************ 00:03:02.995 00:03:02.995 real 0m21.877s 00:03:02.995 user 0m7.535s 00:03:02.995 sys 0m12.928s 00:03:02.995 08:03:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:02.995 08:03:36 -- common/autotest_common.sh@10 -- # set +x 00:03:02.995 ************************************ 00:03:02.995 END TEST acl 00:03:02.995 ************************************ 00:03:02.995 08:03:36 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:02.995 08:03:36 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:02.995 08:03:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:02.995 08:03:36 -- common/autotest_common.sh@10 -- # set +x 00:03:02.995 ************************************ 00:03:02.995 START TEST hugepages 00:03:02.995 ************************************ 00:03:02.995 08:03:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:03.255 * Looking for test storage... 
00:03:03.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:03.255 08:03:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:03.255 08:03:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:03.255 08:03:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:03.255 08:03:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:03.255 08:03:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:03.255 08:03:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:03.255 08:03:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:03.255 08:03:36 -- setup/common.sh@18 -- # local node= 00:03:03.255 08:03:36 -- setup/common.sh@19 -- # local var val 00:03:03.255 08:03:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.255 08:03:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.255 08:03:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.255 08:03:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.255 08:03:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.255 08:03:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 71395740 kB' 'MemAvailable: 76370616 kB' 'Buffers: 2696 kB' 'Cached: 14213864 kB' 'SwapCached: 0 kB' 'Active: 10072856 kB' 'Inactive: 4658400 kB' 'Active(anon): 9506632 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517996 kB' 'Mapped: 209192 kB' 'Shmem: 8991936 kB' 'KReclaimable: 530088 kB' 'Slab: 1062332 kB' 'SReclaimable: 530088 kB' 'SUnreclaim: 532244 kB' 'KernelStack: 19376 kB' 'PageTables: 
9332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52952952 kB' 'Committed_AS: 10884268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212760 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue
00:03:03.255 08:03:36 -- setup/common.sh@31 -- # IFS=': '
00:03:03.255 08:03:36 -- setup/common.sh@31 -- # read -r var val _
00:03:03.255 08:03:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:03.255 08:03:36 -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue cycle repeats for Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp ...]
00:03:03.256 08:03:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:03.256 08:03:36 -- setup/common.sh@33 -- # echo 2048
00:03:03.256 08:03:36 -- setup/common.sh@33 -- # return 0
00:03:03.256 08:03:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:03.256 08:03:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:03.256 08:03:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:03.256 08:03:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:03.256 08:03:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:03.256 08:03:36 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:03.256 08:03:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:03.256 08:03:36 -- setup/hugepages.sh@207 -- # get_nodes
00:03:03.256 08:03:36 -- setup/hugepages.sh@27 -- # local node
00:03:03.256 08:03:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.256 08:03:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:03.256 08:03:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.256 08:03:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:03.256 08:03:36 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:03.256 08:03:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:03.256 08:03:36 -- setup/hugepages.sh@208 -- # clear_hp
00:03:03.256 08:03:36 -- setup/hugepages.sh@37 -- # local node hp
00:03:03.256 08:03:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:03.256 08:03:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:03.256 08:03:36 -- setup/hugepages.sh@41 -- # echo 0
00:03:03.256 08:03:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:03.256 08:03:36 -- setup/hugepages.sh@41 -- # echo 0
[... the same clear_hp echo-0 cycle repeats for the second node ...]
00:03:03.256 08:03:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:03.256 08:03:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:03.256 08:03:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:03.256 08:03:36 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:03:03.256 08:03:36 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:03:03.256 08:03:36 -- common/autotest_common.sh@10 -- # set +x
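The read/continue cycle above is setup/common.sh scanning /proc/meminfo one "Field: value unit" line at a time until it reaches Hugepagesize (2048 kB on this host). A minimal stand-alone sketch of that scan, under the assumption stated in the comments (get_meminfo_field is a hypothetical name, not the SPDK helper, and a hard-coded sample stands in for the live /proc/meminfo):

```shell
# Hypothetical sketch of the scan traced in the log above: split each
# "Field: value unit" line on ': ' and stop at the first matching field,
# exactly like the IFS=': '; read -r var val _ loop in setup/common.sh.
get_meminfo_field() {
    want=$1
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$want" ]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Hard-coded stand-in for /proc/meminfo, using values seen in this log.
sample='MemTotal: 93323000 kB
AnonHugePages: 0 kB
Hugepagesize: 2048 kB'

printf '%s\n' "$sample" | get_meminfo_field Hugepagesize   # prints 2048
```

The early return is why the log shows no comparisons past the matched field: the helper stops reading as soon as it has the value.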
************************************
00:03:03.256 START TEST default_setup
00:03:03.256 ************************************
00:03:03.256 08:03:36 -- common/autotest_common.sh@1102 -- # default_setup
00:03:03.256 08:03:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:03.256 08:03:36 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:03.256 08:03:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:03.256 08:03:36 -- setup/hugepages.sh@51 -- # shift
00:03:03.256 08:03:36 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:03.256 08:03:36 -- setup/hugepages.sh@52 -- # local node_ids
00:03:03.256 08:03:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:03.256 08:03:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:03.256 08:03:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:03.256 08:03:36 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:03.256 08:03:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:03.256 08:03:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:03.256 08:03:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:03.256 08:03:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:03.256 08:03:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:03.256 08:03:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:03.256 08:03:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:03.256 08:03:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:03.256 08:03:36 -- setup/hugepages.sh@73 -- # return 0
00:03:03.256 08:03:36 -- setup/hugepages.sh@137 -- # setup output
00:03:03.256 08:03:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.256 08:03:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:06.540 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:06.540 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:06.540 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:06.541 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:07.478 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:07.478 08:03:41 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:07.478 08:03:41 -- setup/hugepages.sh@89 -- # local node
00:03:07.478 08:03:41 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:07.478 08:03:41 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:07.478 08:03:41 -- setup/hugepages.sh@92 -- # local surp
00:03:07.478 08:03:41 -- setup/hugepages.sh@93 -- # local resv
00:03:07.478 08:03:41 -- setup/hugepages.sh@94 -- # local anon
00:03:07.478 08:03:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:07.478 08:03:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:07.478 08:03:41 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:07.478 08:03:41 -- setup/common.sh@18 -- # local node=
00:03:07.478 08:03:41 -- setup/common.sh@19 -- # local var val
00:03:07.478 08:03:41 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.478 08:03:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.478 08:03:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.478 08:03:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.478 08:03:41 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.478 08:03:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.478 08:03:41 -- setup/common.sh@31 -- # IFS=': '
00:03:07.478 08:03:41 -- setup/common.sh@31 -- # read -r var val _
00:03:07.478 08:03:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73544528 kB' 'MemAvailable: 78519396 kB' 'Buffers: 2696 kB' 'Cached: 14213964 kB' 'SwapCached: 0 kB' 'Active: 10084052 kB' 'Inactive: 4658400 kB' 'Active(anon): 9517828 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529340 kB' 'Mapped: 208336 kB' 'Shmem: 8992036 kB' 'KReclaimable: 530080 kB' 'Slab: 1060784 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530704 kB' 'KernelStack: 19392 kB' 'PageTables: 9164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10899036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212724 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB'
00:03:07.478 08:03:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:07.478 08:03:41 -- setup/common.sh@32 -- # continue
00:03:07.478 08:03:41 -- setup/common.sh@31 -- # IFS=': '
00:03:07.478 08:03:41 -- setup/common.sh@31 -- # read -r var val _
[... the same IFS/read/compare/continue cycle repeats for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted ...]
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:07.479 08:03:41 -- setup/common.sh@33 -- # echo 0
00:03:07.479 08:03:41 -- setup/common.sh@33 -- # return 0
00:03:07.479 08:03:41 -- setup/hugepages.sh@97 -- # anon=0
00:03:07.479 08:03:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:07.479 08:03:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.479 08:03:41 -- setup/common.sh@18 -- # local node=
00:03:07.479 08:03:41 -- setup/common.sh@19 -- # local var val
00:03:07.479 08:03:41 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.479 08:03:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.479 08:03:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.479 08:03:41 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.479 08:03:41 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.479 08:03:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.479 08:03:41 -- setup/common.sh@31 -- # IFS=': '
00:03:07.479 08:03:41 -- setup/common.sh@31 -- # read -r var val _
00:03:07.479 08:03:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73547956 kB' 'MemAvailable: 78522824 kB' 'Buffers: 2696 kB' 'Cached: 14213968 kB' 'SwapCached: 0 kB' 'Active: 10083888 kB' 'Inactive: 4658400 kB' 'Active(anon): 9517664 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529076 kB' 'Mapped: 208336 kB' 'Shmem: 8992040 kB' 'KReclaimable: 530080 kB' 'Slab: 1060748 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530668 kB' 'KernelStack: 19328 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10899052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212708 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB'
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # continue
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # continue
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # continue
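Both meminfo snapshots above are taken by the same get_meminfo helper, which first picks its source file: the global /proc/meminfo when no node is given (the `[[ -e /sys/devices/system/node/node/meminfo ]]` check with an empty node fails), or the per-node sysfs meminfo otherwise, whose lines carry a "Node N " prefix that the real script strips. A hedged sketch of just that source-selection step (pick_meminfo_file is an illustrative name, not the SPDK function):

```shell
# Illustrative sketch of the source selection traced in the log: prefer the
# per-node meminfo when a node id is given and its sysfs file exists,
# otherwise fall back to the global /proc/meminfo.
pick_meminfo_file() {
    node=$1
    if [ -n "$node" ] && [ -e "/sys/devices/system/node/node$node/meminfo" ]; then
        echo "/sys/devices/system/node/node$node/meminfo"
    else
        echo /proc/meminfo
    fi
}

pick_meminfo_file ""        # prints /proc/meminfo
pick_meminfo_file 424242    # nonexistent node id, falls back to /proc/meminfo
```

With the empty node argument, the path probed is literally /sys/devices/system/node/node/meminfo, which never exists; that is exactly the failed `[[ -e ... ]]` test visible in the trace.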
00:03:07.479 08:03:41 -- setup/common.sh@31 -- # IFS=': '
00:03:07.479 08:03:41 -- setup/common.sh@31 -- # read -r var val _
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.479 08:03:41 -- setup/common.sh@32 -- # continue
[... the same IFS/read/compare/continue cycle repeats for Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables and NFS_Unstable ...]
00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _
08:03:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.741 08:03:41 -- setup/common.sh@33 -- # echo 0 00:03:07.741 08:03:41 -- setup/common.sh@33 -- # return 0 00:03:07.741 08:03:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:07.741 08:03:41 -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:03:07.741 08:03:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.741 08:03:41 -- setup/common.sh@18 -- # local node= 00:03:07.741 08:03:41 -- setup/common.sh@19 -- # local var val 00:03:07.741 08:03:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.741 08:03:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.741 08:03:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.741 08:03:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.741 08:03:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.741 08:03:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73548392 kB' 'MemAvailable: 78523260 kB' 'Buffers: 2696 kB' 'Cached: 14213984 kB' 'SwapCached: 0 kB' 'Active: 10083716 kB' 'Inactive: 4658400 kB' 'Active(anon): 9517492 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528948 kB' 'Mapped: 208444 kB' 'Shmem: 8992056 kB' 'KReclaimable: 530080 kB' 'Slab: 1060784 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530704 kB' 'KernelStack: 19376 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10899200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212660 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.741 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.741 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 
-- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 
00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.742 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.742 08:03:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.743 08:03:41 -- setup/common.sh@33 -- # echo 0 00:03:07.743 08:03:41 -- setup/common.sh@33 -- # return 0 00:03:07.743 08:03:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:07.743 08:03:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.743 nr_hugepages=1024 00:03:07.743 08:03:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.743 resv_hugepages=0 00:03:07.743 08:03:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.743 surplus_hugepages=0 00:03:07.743 08:03:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.743 anon_hugepages=0 00:03:07.743 08:03:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.743 08:03:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.743 08:03:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.743 08:03:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.743 08:03:41 -- setup/common.sh@18 -- # local node= 00:03:07.743 08:03:41 -- setup/common.sh@19 -- # local var val 00:03:07.743 08:03:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.743 08:03:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.743 08:03:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.743 08:03:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.743 08:03:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.743 08:03:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 93323000 kB' 'MemFree: 73548608 kB' 'MemAvailable: 78523476 kB' 'Buffers: 2696 kB' 'Cached: 14214000 kB' 'SwapCached: 0 kB' 'Active: 10083620 kB' 'Inactive: 4658400 kB' 'Active(anon): 9517396 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528844 kB' 'Mapped: 208444 kB' 'Shmem: 8992072 kB' 'KReclaimable: 530080 kB' 'Slab: 1060784 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530704 kB' 'KernelStack: 19408 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10899580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212660 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 
00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.743 08:03:41 -- setup/common.sh@32 -- # continue 00:03:07.743 08:03:41 -- 
setup/common.sh@31 -- # IFS=': '
00:03:07.743 08:03:41 -- setup/common.sh@31 -- # read -r var val _
00:03:07.743 [xtrace of the node0 meminfo scan loop elided: each key from Active(file) through Unaccepted is compared against HugePages_Total and skipped with "continue"]
00:03:07.744 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:07.744 08:03:41 -- setup/common.sh@33 -- # echo 1024
00:03:07.744 08:03:41 -- setup/common.sh@33 -- # return 0
00:03:07.744 08:03:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:07.744 08:03:41 -- setup/hugepages.sh@112 -- # get_nodes
00:03:07.744 08:03:41 -- setup/hugepages.sh@27 -- # local node
00:03:07.744 08:03:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.744 08:03:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:07.744 08:03:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.744 08:03:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:07.744 08:03:41 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:07.744 08:03:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:07.744 08:03:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.744 08:03:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:07.744 08:03:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:07.744 08:03:41 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.744 08:03:41 -- setup/common.sh@18 -- # local node=0
00:03:07.744 08:03:41 -- setup/common.sh@19 -- # local var val
00:03:07.744 08:03:41 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.744 08:03:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.744 08:03:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:07.744 08:03:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:07.744 08:03:41 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.744 08:03:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.744 08:03:41 -- setup/common.sh@31 -- # IFS=': '
00:03:07.744 08:03:41 -- setup/common.sh@31 -- # read -r var val _
00:03:07.744 08:03:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 24550808 kB' 'MemUsed: 8083820 kB' 'SwapCached: 0 kB' 'Active: 3233148 kB' 'Inactive: 1226712 kB' 'Active(anon): 2779424 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4113920 kB' 'Mapped: 156940 kB' 'AnonPages: 349208 kB' 'Shmem: 2433484 kB' 'KernelStack: 11016 kB' 'PageTables: 6400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 238916 kB' 'Slab: 513908 kB' 'SReclaimable: 238916 kB' 'SUnreclaim: 274992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:07.744 [xtrace of the node0 meminfo scan loop elided: each key from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped with "continue"]
00:03:07.745 08:03:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.745 08:03:41 -- setup/common.sh@33 -- # echo 0
00:03:07.745 08:03:41 -- setup/common.sh@33 -- # return 0
00:03:07.745 08:03:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:07.745 08:03:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.745 08:03:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.745 08:03:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.745 08:03:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:07.745 08:03:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:07.745 
real 0m4.515s
user 0m1.469s
sys 0m2.259s
00:03:07.745 08:03:41 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:07.745 08:03:41 -- common/autotest_common.sh@10 -- # set +x
00:03:07.745 ************************************
00:03:07.745 END TEST default_setup
00:03:07.745 ************************************
00:03:07.745 08:03:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:07.745 08:03:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:03:07.745 08:03:41 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:03:07.745 08:03:41 -- common/autotest_common.sh@10 -- # set +x
00:03:07.745 ************************************
00:03:07.745 START TEST per_node_1G_alloc
00:03:07.745 ************************************
00:03:07.746 08:03:41 -- common/autotest_common.sh@1102 -- # per_node_1G_alloc
00:03:07.746 08:03:41 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:07.746 08:03:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:07.746 08:03:41 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:07.746 08:03:41 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:07.746 08:03:41 -- setup/hugepages.sh@51 -- # shift
00:03:07.746 08:03:41 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:07.746 08:03:41 -- setup/hugepages.sh@52 -- # local node_ids
00:03:07.746 08:03:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:07.746 08:03:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:07.746 08:03:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:07.746 08:03:41 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:07.746 08:03:41 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:07.746 08:03:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:07.746 08:03:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:07.746 08:03:41 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:07.746 08:03:41 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:07.746 08:03:41 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:07.746 08:03:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:07.746 08:03:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:07.746 08:03:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:07.746 08:03:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:07.746 08:03:41 -- setup/hugepages.sh@73 -- # return 0
00:03:07.746 08:03:41 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:07.746 08:03:41 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:07.746 08:03:41 -- setup/hugepages.sh@146 -- # setup output
00:03:07.746 08:03:41 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.746 08:03:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:11.035 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:11.035 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:11.035 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.035 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:11.035 08:03:44 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:11.035 08:03:44 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:11.035 08:03:44 -- setup/hugepages.sh@89 -- # local node
00:03:11.035 08:03:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:11.035 08:03:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:11.035 08:03:44 -- setup/hugepages.sh@92 -- # local surp
00:03:11.035 08:03:44 -- setup/hugepages.sh@93 -- # local resv
00:03:11.035 08:03:44 -- setup/hugepages.sh@94 -- # local anon
00:03:11.035 08:03:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:11.035 08:03:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:11.035 08:03:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:11.035 08:03:44 -- setup/common.sh@18 -- # local node=
00:03:11.035 08:03:44 -- setup/common.sh@19 -- # local var val
00:03:11.035 08:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.035 08:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.035 08:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.035 08:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.035 08:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.035 08:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.035 08:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:11.035 08:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:11.036 08:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73552864 kB' 'MemAvailable: 78527732 kB' 'Buffers: 2696 kB' 'Cached: 14214084 kB' 'SwapCached: 0 kB' 'Active: 10084796 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518572 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529708 kB' 'Mapped: 208464 kB' 'Shmem: 8992156 kB' 'KReclaimable: 530080 kB' 'Slab: 1060204 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530124 kB' 'KernelStack: 19408 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10899676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212964 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB'
00:03:11.036 [xtrace of the /proc/meminfo scan loop elided: keys from MemTotal onward are compared against AnonHugePages and skipped with "continue"; the trace continues]
00:03:11.037 08:03:44 -- setup/common.sh@31
-- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.037 08:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.037 08:03:44 -- setup/common.sh@33 -- # echo 0 00:03:11.037 08:03:44 -- setup/common.sh@33 -- # return 0 00:03:11.037 08:03:44 -- setup/hugepages.sh@97 -- # anon=0 00:03:11.037 08:03:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.037 08:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.037 08:03:44 -- setup/common.sh@18 -- # local node= 00:03:11.037 08:03:44 -- setup/common.sh@19 -- # local var val 00:03:11.037 08:03:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.037 08:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.037 08:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.037 08:03:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.037 08:03:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.037 08:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.037 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.037 08:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73557460 kB' 'MemAvailable: 78532328 kB' 'Buffers: 2696 kB' 'Cached: 14214092 kB' 'SwapCached: 0 kB' 'Active: 10085216 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518992 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530272 kB' 'Mapped: 208464 kB' 'Shmem: 8992164 kB' 'KReclaimable: 530080 kB' 'Slab: 1060180 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530100 kB' 'KernelStack: 19424 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10900060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212964 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' [identical IFS=': ' / read -r / continue iterations for the non-matching fields, MemTotal through HugePages_Rsvd, elided] 00:03:11.039 08:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.039 08:03:44 -- setup/common.sh@33 -- # echo 0 00:03:11.039 08:03:44 -- setup/common.sh@33 -- # return 0 00:03:11.039 08:03:44 -- setup/hugepages.sh@99 -- # surp=0 00:03:11.039 08:03:44 --
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.039 08:03:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.039 08:03:44 -- setup/common.sh@18 -- # local node= 00:03:11.039 08:03:44 -- setup/common.sh@19 -- # local var val 00:03:11.039 08:03:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.039 08:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.039 08:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.039 08:03:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.039 08:03:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.039 08:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.039 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.039 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.039 08:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73557920 kB' 'MemAvailable: 78532788 kB' 'Buffers: 2696 kB' 'Cached: 14214104 kB' 'SwapCached: 0 kB' 'Active: 10084476 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518252 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529460 kB' 'Mapped: 208452 kB' 'Shmem: 8992176 kB' 'KReclaimable: 530080 kB' 'Slab: 1060204 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530124 kB' 'KernelStack: 19408 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10900072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212964 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' [identical IFS=': ' / read -r / continue iterations for the non-matching fields, MemTotal through HugePages_Total, elided; the HugePages_Rsvd scan is cut off at the end of this log chunk] 00:03:11.041 08:03:44 -- setup/common.sh@32 --
# [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.041 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.041 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.041 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.041 08:03:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.041 08:03:44 -- setup/common.sh@33 -- # echo 0 00:03:11.041 08:03:44 -- setup/common.sh@33 -- # return 0 00:03:11.041 08:03:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:11.041 08:03:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.041 nr_hugepages=1024 00:03:11.041 08:03:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.041 resv_hugepages=0 00:03:11.041 08:03:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.041 surplus_hugepages=0 00:03:11.041 08:03:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.041 anon_hugepages=0 00:03:11.041 08:03:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.041 08:03:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.041 08:03:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.041 08:03:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.041 08:03:44 -- setup/common.sh@18 -- # local node= 00:03:11.041 08:03:44 -- setup/common.sh@19 -- # local var val 00:03:11.041 08:03:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.041 08:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.041 08:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.041 08:03:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.041 08:03:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.041 08:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.041 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.041 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.042 08:03:44 -- 
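[editor's sketch] The trace above is setup/common.sh's get_meminfo helper walking a meminfo file with `IFS=': ' read -r var val _` until it finds the requested field. A minimal standalone sketch of that pattern follows; `get_meminfo_sketch` is an illustrative name, not SPDK's actual function, and the Node-prefix handling is a simplified stand-in for the script's `mem=("${mem[@]#Node +([0-9]) }")` strip.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each "Key: value unit"
# line on ':' and spaces, and print the value for the requested key.
# get_meminfo_sketch is an illustrative name, not SPDK's actual helper.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node stats live in /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # Per-node files prefix every line with "Node N "; drop that prefix
        if [[ $var == Node ]]; then
            set -- $_
            var=$1 val=$2
        fi
        if [[ $var == "$get" ]]; then
            echo "$val" # numeric value only; the "kB" unit lands in $_
            return 0
        fi
    done <"$mem_f"
    return 1
}

get_meminfo_sketch MemTotal
```

Reading field by field like this avoids spawning grep/awk per lookup, which matters when the test suite queries meminfo dozens of times in a row.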
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73558552 kB' 'MemAvailable: 78533420 kB' 'Buffers: 2696 kB' 'Cached: 14214116 kB' 'SwapCached: 0 kB' 'Active: 10085056 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518832 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530004 kB' 'Mapped: 208452 kB' 'Shmem: 8992188 kB' 'KReclaimable: 530080 kB' 'Slab: 1060204 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 530124 kB' 'KernelStack: 19424 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10900088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212964 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB'
[trace condensed: fields MemTotal through Unaccepted each compared against HugePages_Total and skipped via continue/IFS/read at setup/common.sh@31-32]
00:03:11.044 08:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.044 08:03:44 -- setup/common.sh@33 -- # echo 1024
00:03:11.044 08:03:44 -- setup/common.sh@33 -- # return 0
00:03:11.044 08:03:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.044 08:03:44 -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.044 08:03:44 -- setup/hugepages.sh@27 -- # local node
00:03:11.044 08:03:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.044 08:03:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.044 08:03:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.044 08:03:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.044 08:03:44 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.044 08:03:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.044 08:03:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.044 08:03:44 -- setup/hugepages.sh@116 -- # ((
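[editor's sketch] The get_nodes trace above globs /sys/devices/system/node/node* with the extglob pattern `node+([0-9])` and records the expected hugepage split, 512 pages on each of this host's 2 NUMA nodes. A minimal sketch under those assumptions; `nodes_sys`/`no_nodes` mirror the script's variable names, and the 512-per-node value is taken from this log, not a general default.

```shell
#!/usr/bin/env bash
# Sketch of the get_nodes pattern traced above: enumerate NUMA nodes via sysfs
# and record the per-node hugepage expectation (512 each in this log's run).
shopt -s extglob nullglob   # +([0-9]) needs extglob; nullglob empties no-match
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # index by the trailing node number, e.g. ".../node1" -> nodes_sys[1]
    nodes_sys[${node##*node}]=512
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || echo "no NUMA nodes detected (no sysfs node entries?)"
echo "no_nodes=$no_nodes"
```

The later per-node get_meminfo calls then verify that each node's HugePages_Total matches this expected split and that HugePages_Surp is 0.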
nodes_test[node] += resv ))
00:03:11.044 08:03:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.044 08:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.044 08:03:44 -- setup/common.sh@18 -- # local node=0
00:03:11.044 08:03:44 -- setup/common.sh@19 -- # local var val
00:03:11.044 08:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.044 08:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.044 08:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.044 08:03:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.044 08:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.044 08:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.044 08:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:11.044 08:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:11.044 08:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 25595256 kB' 'MemUsed: 7039372 kB' 'SwapCached: 0 kB' 'Active: 3233236 kB' 'Inactive: 1226712 kB' 'Active(anon): 2779512 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4113972 kB' 'Mapped: 156948 kB' 'AnonPages: 349140 kB' 'Shmem: 2433536 kB' 'KernelStack: 11016 kB' 'PageTables: 6408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 238916 kB' 'Slab: 513328 kB' 'SReclaimable: 238916 kB' 'SUnreclaim: 274412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: node0 fields MemTotal through HugePages_Free each compared against HugePages_Surp and skipped via continue/IFS/read at setup/common.sh@31-32]
00:03:11.045 08:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.045 08:03:44 -- setup/common.sh@33 -- # echo 0
00:03:11.045 08:03:44 -- setup/common.sh@33 -- # return 0
00:03:11.045 08:03:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.045 08:03:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.045 08:03:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.045 08:03:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:11.045 08:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.045 08:03:44 -- setup/common.sh@18 -- # local node=1
00:03:11.045 08:03:44 -- setup/common.sh@19 -- # local var val
00:03:11.045 08:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.045 08:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.045 08:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:11.045 08:03:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:11.045 08:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.045 08:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:11.046 08:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 47963520 kB' 'MemUsed: 12724852 kB' 'SwapCached: 0 kB' 'Active: 6851152 kB' 'Inactive: 3431688 kB' 'Active(anon): 6738652 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10102856 kB' 'Mapped: 51504 kB' 'AnonPages: 180176 kB' 'Shmem: 6558668 kB' 'KernelStack: 8392 kB' 'PageTables: 2820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291164 kB' 'Slab: 546876 kB' 'SReclaimable: 291164 kB' 'SUnreclaim: 255712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB'
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # 
continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 
-- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.046 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.046 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.047 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.047 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.368 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.368 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.368 08:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.368 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.368 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.368 08:03:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.368 08:03:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.368 08:03:44 -- setup/common.sh@32 -- # continue 00:03:11.368 08:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.368 08:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.368 08:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.368 08:03:44 -- setup/common.sh@33 -- # echo 0 00:03:11.368 08:03:44 -- setup/common.sh@33 -- # return 0 00:03:11.368 08:03:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.368 08:03:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.368 08:03:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.368 08:03:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.368 08:03:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:11.368 node0=512 expecting 512 00:03:11.368 08:03:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.368 08:03:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.368 08:03:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.368 08:03:44 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:11.368 node1=512 expecting 512 00:03:11.368 08:03:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:11.368 00:03:11.368 real 0m3.401s 00:03:11.368 user 0m1.433s 00:03:11.368 sys 0m2.037s 00:03:11.368 08:03:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:11.368 08:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:11.368 ************************************ 00:03:11.368 END TEST per_node_1G_alloc 00:03:11.368 ************************************ 00:03:11.368 08:03:44 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:11.368 08:03:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:11.368 08:03:44 
-- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:11.368 08:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:11.368 ************************************ 00:03:11.368 START TEST even_2G_alloc 00:03:11.368 ************************************ 00:03:11.368 08:03:44 -- common/autotest_common.sh@1102 -- # even_2G_alloc 00:03:11.368 08:03:44 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:11.368 08:03:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:11.368 08:03:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.368 08:03:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.368 08:03:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:11.368 08:03:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.368 08:03:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.368 08:03:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.368 08:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:11.368 08:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.368 08:03:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.368 08:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.368 08:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.368 08:03:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:11.368 08:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.368 08:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:11.368 08:03:44 -- setup/hugepages.sh@83 -- # : 512 00:03:11.368 08:03:44 -- setup/hugepages.sh@84 -- # : 1 00:03:11.368 08:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.368 08:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:11.368 08:03:44 -- setup/hugepages.sh@83 -- # : 0 00:03:11.368 08:03:44 -- setup/hugepages.sh@84 -- # : 0 00:03:11.369 08:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.369 08:03:44 -- setup/hugepages.sh@153 -- # 
NRHUGE=1024 00:03:11.369 08:03:44 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:11.369 08:03:44 -- setup/hugepages.sh@153 -- # setup output 00:03:11.369 08:03:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.369 08:03:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.903 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:14.474 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:14.474 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.474 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:14.474 08:03:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:14.474 08:03:48 -- setup/hugepages.sh@89 -- # local node 00:03:14.474 08:03:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.474 08:03:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.474 08:03:48 -- setup/hugepages.sh@92 -- # local surp 
00:03:14.474 08:03:48 -- setup/hugepages.sh@93 -- # local resv 00:03:14.474 08:03:48 -- setup/hugepages.sh@94 -- # local anon 00:03:14.474 08:03:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.474 08:03:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.474 08:03:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.474 08:03:48 -- setup/common.sh@18 -- # local node= 00:03:14.474 08:03:48 -- setup/common.sh@19 -- # local var val 00:03:14.474 08:03:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.474 08:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.474 08:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.474 08:03:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.474 08:03:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.474 08:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73607636 kB' 'MemAvailable: 78582504 kB' 'Buffers: 2696 kB' 'Cached: 14214212 kB' 'SwapCached: 0 kB' 'Active: 10083384 kB' 'Inactive: 4658400 kB' 'Active(anon): 9517160 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527772 kB' 'Mapped: 207452 kB' 'Shmem: 8992284 kB' 'KReclaimable: 530080 kB' 'Slab: 1059164 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529084 kB' 'KernelStack: 19344 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10883980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212868 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.474 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.474 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 
-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.475 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.475 08:03:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.475 08:03:48 -- setup/common.sh@33 -- # echo 0 00:03:14.475 08:03:48 -- setup/common.sh@33 -- # return 0 00:03:14.475 08:03:48 -- setup/hugepages.sh@97 -- # anon=0 00:03:14.475 08:03:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.475 08:03:48 -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:03:14.475 08:03:48 -- setup/common.sh@18 -- # local node= 00:03:14.475 08:03:48 -- setup/common.sh@19 -- # local var val 00:03:14.475 08:03:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.475 08:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.476 08:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.476 08:03:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.476 08:03:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.476 08:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73609100 kB' 'MemAvailable: 78583968 kB' 'Buffers: 2696 kB' 'Cached: 14214212 kB' 'SwapCached: 0 kB' 'Active: 10082708 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516484 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527492 kB' 'Mapped: 207368 kB' 'Shmem: 8992284 kB' 'KReclaimable: 530080 kB' 'Slab: 1059124 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529044 kB' 'KernelStack: 19344 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10883992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212836 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 
14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.476 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- 
setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- 
setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- 
# continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.477 08:03:48 -- setup/common.sh@33 -- # echo 0 00:03:14.477 08:03:48 -- setup/common.sh@33 -- # return 0 00:03:14.477 08:03:48 -- setup/hugepages.sh@99 -- # surp=0 00:03:14.477 08:03:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.477 08:03:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.477 08:03:48 -- setup/common.sh@18 -- # local node= 00:03:14.477 08:03:48 -- setup/common.sh@19 -- # local var val 00:03:14.477 08:03:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.477 08:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.477 08:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.477 08:03:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.477 08:03:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.477 08:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73609604 kB' 'MemAvailable: 78584472 kB' 'Buffers: 2696 kB' 'Cached: 14214212 kB' 'SwapCached: 0 kB' 'Active: 10082708 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516484 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527524 kB' 'Mapped: 207368 kB' 'Shmem: 8992284 kB' 'KReclaimable: 530080 kB' 'Slab: 1059124 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529044 kB' 'KernelStack: 19360 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10884008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212836 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.477 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.477 08:03:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 
08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- 
setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- 
# continue 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.478 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.478 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 
00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.479 08:03:48 -- setup/common.sh@33 -- # echo 0 00:03:14.479 08:03:48 -- setup/common.sh@33 -- # return 0 00:03:14.479 08:03:48 -- setup/hugepages.sh@100 -- # resv=0 00:03:14.479 08:03:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.479 nr_hugepages=1024 00:03:14.479 08:03:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.479 resv_hugepages=0 00:03:14.479 08:03:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.479 surplus_hugepages=0 00:03:14.479 08:03:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.479 anon_hugepages=0 00:03:14.479 08:03:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.479 08:03:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.479 08:03:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.479 08:03:48 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:14.479 08:03:48 -- setup/common.sh@18 -- # local node= 00:03:14.479 08:03:48 -- setup/common.sh@19 -- # local var val 00:03:14.479 08:03:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.479 08:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.479 08:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.479 08:03:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.479 08:03:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.479 08:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73609740 kB' 'MemAvailable: 78584608 kB' 'Buffers: 2696 kB' 'Cached: 14214212 kB' 'SwapCached: 0 kB' 'Active: 10082844 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516620 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527660 kB' 'Mapped: 207368 kB' 'Shmem: 8992284 kB' 'KReclaimable: 530080 kB' 'Slab: 1059124 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529044 kB' 'KernelStack: 19344 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10884020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212836 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 
14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.479 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.479 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ VmallocTotal 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.480 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.481 08:03:48 -- setup/common.sh@33 -- # echo 1024 00:03:14.481 08:03:48 -- setup/common.sh@33 -- # return 0 00:03:14.481 08:03:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.481 08:03:48 -- setup/hugepages.sh@112 
-- # get_nodes 00:03:14.481 08:03:48 -- setup/hugepages.sh@27 -- # local node 00:03:14.481 08:03:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.481 08:03:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.481 08:03:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.481 08:03:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.481 08:03:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.481 08:03:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.481 08:03:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.481 08:03:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.481 08:03:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.481 08:03:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.481 08:03:48 -- setup/common.sh@18 -- # local node=0 00:03:14.481 08:03:48 -- setup/common.sh@19 -- # local var val 00:03:14.481 08:03:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.481 08:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.481 08:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.481 08:03:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.481 08:03:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.481 08:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 08:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 25631316 kB' 'MemUsed: 7003312 kB' 'SwapCached: 0 kB' 'Active: 3231608 kB' 'Inactive: 1226712 kB' 'Active(anon): 2777884 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'FilePages: 4114000 kB' 'Mapped: 156952 kB' 'AnonPages: 347556 kB' 'Shmem: 2433564 kB' 'KernelStack: 10984 kB' 'PageTables: 6216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 238916 kB' 'Slab: 512968 kB' 'SReclaimable: 238916 kB' 'SUnreclaim: 274052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.481 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.741 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.741 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- 
setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 
00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@33 -- # echo 0 00:03:14.742 08:03:48 -- setup/common.sh@33 -- # return 0 00:03:14.742 08:03:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.742 08:03:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.742 08:03:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.742 08:03:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:14.742 08:03:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.742 08:03:48 -- setup/common.sh@18 -- # local node=1 00:03:14.742 08:03:48 -- setup/common.sh@19 -- # local var val 00:03:14.742 08:03:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:14.742 08:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.742 08:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:14.742 08:03:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:14.742 08:03:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.742 08:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 47980696 kB' 'MemUsed: 12707676 kB' 'SwapCached: 0 kB' 'Active: 6851536 kB' 'Inactive: 3431688 kB' 'Active(anon): 6739036 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10102964 kB' 'Mapped: 50416 kB' 'AnonPages: 180428 kB' 'Shmem: 6558776 kB' 'KernelStack: 8360 kB' 'PageTables: 2696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291164 kB' 'Slab: 546156 kB' 'SReclaimable: 291164 kB' 'SUnreclaim: 254992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.742 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.742 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 
08:03:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ KernelStack 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.743 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.743 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.744 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.744 08:03:48 -- setup/common.sh@32 -- # continue 00:03:14.744 08:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:14.744 08:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:14.744 08:03:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.744 08:03:48 -- setup/common.sh@33 -- # echo 0 00:03:14.744 08:03:48 -- setup/common.sh@33 -- # return 0 00:03:14.744 08:03:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.744 08:03:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.744 08:03:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.744 08:03:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.744 08:03:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:14.744 node0=512 expecting 512 00:03:14.744 08:03:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.744 08:03:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.744 08:03:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.744 08:03:48 -- setup/hugepages.sh@128 -- 
# echo 'node1=512 expecting 512' 00:03:14.744 node1=512 expecting 512 00:03:14.744 08:03:48 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:14.744 00:03:14.744 real 0m3.455s 00:03:14.744 user 0m1.355s 00:03:14.744 sys 0m2.134s 00:03:14.744 08:03:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:14.744 08:03:48 -- common/autotest_common.sh@10 -- # set +x 00:03:14.744 ************************************ 00:03:14.744 END TEST even_2G_alloc 00:03:14.744 ************************************ 00:03:14.744 08:03:48 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:14.744 08:03:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:14.744 08:03:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:14.744 08:03:48 -- common/autotest_common.sh@10 -- # set +x 00:03:14.744 ************************************ 00:03:14.744 START TEST odd_alloc 00:03:14.744 ************************************ 00:03:14.744 08:03:48 -- common/autotest_common.sh@1102 -- # odd_alloc 00:03:14.744 08:03:48 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:14.744 08:03:48 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:14.744 08:03:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:14.744 08:03:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:14.744 08:03:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:14.744 08:03:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:14.744 08:03:48 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:14.744 08:03:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:14.744 08:03:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:14.744 08:03:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:14.744 08:03:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:14.744 08:03:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:14.744 08:03:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:14.744 08:03:48 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:14.744 08:03:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.744 08:03:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:14.744 08:03:48 -- setup/hugepages.sh@83 -- # : 513 00:03:14.744 08:03:48 -- setup/hugepages.sh@84 -- # : 1 00:03:14.744 08:03:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.744 08:03:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:14.744 08:03:48 -- setup/hugepages.sh@83 -- # : 0 00:03:14.744 08:03:48 -- setup/hugepages.sh@84 -- # : 0 00:03:14.744 08:03:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.744 08:03:48 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:14.744 08:03:48 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:14.744 08:03:48 -- setup/hugepages.sh@160 -- # setup output 00:03:14.744 08:03:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.744 08:03:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.276 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:17.535 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:17.535 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:17.535 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:17.535 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:17.535 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:17.535 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:17.535 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:17.535 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 
00:03:17.798 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:17.798 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:17.798 08:03:51 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:17.798 08:03:51 -- setup/hugepages.sh@89 -- # local node 00:03:17.798 08:03:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.798 08:03:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.798 08:03:51 -- setup/hugepages.sh@92 -- # local surp 00:03:17.798 08:03:51 -- setup/hugepages.sh@93 -- # local resv 00:03:17.798 08:03:51 -- setup/hugepages.sh@94 -- # local anon 00:03:17.798 08:03:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.798 08:03:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.798 08:03:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.798 08:03:51 -- setup/common.sh@18 -- # local node= 00:03:17.798 08:03:51 -- setup/common.sh@19 -- # local var val 00:03:17.798 08:03:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.798 08:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.798 08:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.798 08:03:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.798 08:03:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.798 08:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73573928 kB' 'MemAvailable: 78548796 kB' 'Buffers: 2696 kB' 'Cached: 14214336 kB' 'SwapCached: 0 kB' 'Active: 10082824 kB' 'Inactive: 
4658400 kB' 'Active(anon): 9516600 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527492 kB' 'Mapped: 207328 kB' 'Shmem: 8992408 kB' 'KReclaimable: 530080 kB' 'Slab: 1059084 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529004 kB' 'KernelStack: 19376 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 10884496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212932 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Buffers == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 
08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 
00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.798 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.798 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.799 08:03:51 -- setup/common.sh@33 -- # echo 0 00:03:17.799 08:03:51 -- setup/common.sh@33 -- # return 0 00:03:17.799 08:03:51 -- setup/hugepages.sh@97 -- # anon=0 00:03:17.799 08:03:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.799 08:03:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.799 08:03:51 -- setup/common.sh@18 -- # local node= 00:03:17.799 08:03:51 -- setup/common.sh@19 -- # local var val 00:03:17.799 08:03:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.799 08:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.799 08:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.799 08:03:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.799 08:03:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.799 08:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73574744 kB' 'MemAvailable: 78549612 kB' 'Buffers: 2696 kB' 'Cached: 14214340 kB' 'SwapCached: 0 kB' 'Active: 10082464 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516240 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527136 kB' 'Mapped: 207388 kB' 'Shmem: 8992412 kB' 'KReclaimable: 530080 kB' 'Slab: 1059132 
kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529052 kB' 'KernelStack: 19360 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 10884508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # 
continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.799 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.799 08:03:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 
08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 
-- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.800 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.800 08:03:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.801 08:03:51 -- setup/common.sh@33 -- # echo 0 00:03:17.801 08:03:51 -- setup/common.sh@33 -- # return 0 00:03:17.801 08:03:51 -- setup/hugepages.sh@99 -- # surp=0 00:03:17.801 08:03:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.801 08:03:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.801 08:03:51 -- setup/common.sh@18 -- # local node= 00:03:17.801 08:03:51 -- setup/common.sh@19 -- # local var val 00:03:17.801 08:03:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.801 08:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.801 08:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.801 08:03:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.801 08:03:51 -- setup/common.sh@28 
-- # mapfile -t mem 00:03:17.801 08:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73574744 kB' 'MemAvailable: 78549612 kB' 'Buffers: 2696 kB' 'Cached: 14214352 kB' 'SwapCached: 0 kB' 'Active: 10082296 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516072 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526916 kB' 'Mapped: 207388 kB' 'Shmem: 8992424 kB' 'KReclaimable: 530080 kB' 'Slab: 1059132 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529052 kB' 'KernelStack: 19344 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 10884524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 
00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.801 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.801 08:03:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.802 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.802 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.802 08:03:51 -- setup/common.sh@33 -- # echo 0 00:03:17.802 08:03:51 -- setup/common.sh@33 -- # return 0 00:03:17.802 08:03:51 -- setup/hugepages.sh@100 -- # resv=0 00:03:17.802 08:03:51 -- setup/hugepages.sh@102 -- # 
echo nr_hugepages=1025 00:03:17.802 nr_hugepages=1025 00:03:17.802 08:03:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.802 resv_hugepages=0 00:03:17.802 08:03:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.802 surplus_hugepages=0 00:03:17.802 08:03:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:17.802 anon_hugepages=0 00:03:17.803 08:03:51 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:17.803 08:03:51 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:17.803 08:03:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.803 08:03:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.803 08:03:51 -- setup/common.sh@18 -- # local node= 00:03:17.803 08:03:51 -- setup/common.sh@19 -- # local var val 00:03:17.803 08:03:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.803 08:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.803 08:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.803 08:03:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.803 08:03:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.803 08:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73574264 kB' 'MemAvailable: 78549132 kB' 'Buffers: 2696 kB' 'Cached: 14214376 kB' 'SwapCached: 0 kB' 'Active: 10082124 kB' 'Inactive: 4658400 kB' 'Active(anon): 9515900 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526724 kB' 'Mapped: 207388 kB' 'Shmem: 8992448 kB' 'KReclaimable: 530080 kB' 'Slab: 1059132 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529052 kB' 'KernelStack: 19344 
kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000504 kB' 'Committed_AS: 10884536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212900 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # 
continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 
00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.803 08:03:51 -- setup/common.sh@32 -- # continue 00:03:17.803 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': 
' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.065 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.065 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 
08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 
08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 
08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.066 08:03:51 -- setup/common.sh@33 -- # echo 1025 00:03:18.066 08:03:51 -- setup/common.sh@33 -- # return 0 00:03:18.066 08:03:51 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.066 08:03:51 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.066 08:03:51 -- setup/hugepages.sh@27 -- # local node 00:03:18.066 08:03:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.066 08:03:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.066 08:03:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.066 08:03:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:18.066 08:03:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.066 08:03:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.066 08:03:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.066 08:03:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.066 08:03:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.066 08:03:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.066 08:03:51 -- setup/common.sh@18 -- # local node=0 00:03:18.066 08:03:51 -- setup/common.sh@19 -- # local var val 00:03:18.066 08:03:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.066 08:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.066 08:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.066 08:03:51 -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.066 08:03:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.066 08:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 25602976 kB' 'MemUsed: 7031652 kB' 'SwapCached: 0 kB' 'Active: 3230060 kB' 'Inactive: 1226712 kB' 'Active(anon): 2776336 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4114016 kB' 'Mapped: 156960 kB' 'AnonPages: 345876 kB' 'Shmem: 2433580 kB' 'KernelStack: 10952 kB' 'PageTables: 6052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 238916 kB' 'Slab: 512896 kB' 'SReclaimable: 238916 kB' 'SUnreclaim: 273980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 
08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.066 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.066 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 
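The scan above runs against a per-node meminfo file; earlier in this test the script enumerated those nodes from sysfs (the `for node in /sys/devices/system/node/node+([0-9])` lines). A minimal standalone sketch of that enumeration — not the SPDK script itself — assuming the standard Linux sysfs layout and 2048 kB hugepages:

```shell
#!/usr/bin/env bash
# Sketch only: enumerate NUMA nodes the way the traced get_nodes loop does,
# and record each node's current 2 MB hugepage count (0 if unreadable).
shopt -s extglob nullglob   # +([0-9]) needs extglob; nullglob drops a non-match
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips through the last "node", leaving the node index
    idx=${node##*node}
    nodes_sys[idx]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null || echo 0)
done
echo "no_nodes=${#nodes_sys[@]} nodes_sys=(${nodes_sys[*]})"
```

On the two-node machine in this log, the equivalent loop yields `no_nodes=2` with one count per node.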
00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # 
[[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@33 -- # echo 0 00:03:18.067 08:03:51 -- setup/common.sh@33 -- # return 0 00:03:18.067 08:03:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.067 08:03:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.067 08:03:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.067 08:03:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.067 08:03:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 
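The long run of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue` lines above is xtrace output from a field-lookup loop over a meminfo file: every line is read, compared against the requested key, and skipped until the key matches, at which point the value is echoed. A minimal sketch of the same lookup (a hypothetical standalone helper, not the SPDK script itself), assuming Linux's `/proc/meminfo` format and per-node files that prefix each line with `Node <n> `:

```shell
#!/usr/bin/env bash
# Sketch only: return the value of one meminfo field, preferring the
# per-node copy under sysfs when a node index is supplied.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix lines with "Node <n> "; strip that, then split
    # each line on ": " so $var is the field name and $val its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

get_meminfo MemTotal   # kB system-wide, or per-node kB with a node argument
```

With xtrace on, each non-matching field produces exactly the `IFS` / `read` / `[[ ... ]]` / `continue` quadruple seen throughout this log.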
00:03:18.067 08:03:51 -- setup/common.sh@18 -- # local node=1 00:03:18.067 08:03:51 -- setup/common.sh@19 -- # local var val 00:03:18.067 08:03:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.067 08:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.067 08:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.067 08:03:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.067 08:03:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.067 08:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 47970276 kB' 'MemUsed: 12718096 kB' 'SwapCached: 0 kB' 'Active: 6852832 kB' 'Inactive: 3431688 kB' 'Active(anon): 6740332 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10103072 kB' 'Mapped: 50428 kB' 'AnonPages: 181592 kB' 'Shmem: 6558884 kB' 'KernelStack: 8424 kB' 'PageTables: 2932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291164 kB' 'Slab: 546236 kB' 'SReclaimable: 291164 kB' 'SUnreclaim: 255072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- 
setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.067 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.067 08:03:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 
00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 
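The per-node `HugePages_Surp` lookups traced here feed the test's final check, where the counts requested per node are compared against the kernel's counts without caring which node got which value (the `sorted_t`/`sorted_s` lines near the end of this test). A sketch of that order-insensitive comparison, with hypothetical counts:

```shell
#!/usr/bin/env bash
# Sketch only: compare two per-node hugepage layouts order-insensitively by
# using the counts themselves as (numeric, hence auto-sorted) array indices.
nodes_test=(512 513)   # hypothetical: what the test requested per node
nodes_sys=(513 512)    # hypothetical: what the kernel reports, nodes swapped
declare -a sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
done
# Indexed-array keys expand in increasing numeric order, so both sides
# come out pre-sorted and a plain string compare suffices.
if [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]; then
    echo "layouts match: ${!sorted_t[*]}"
fi
```

This is why the trace can print `node0=512 expecting 513` and still pass: only the multiset of counts has to agree, not their node assignment.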
00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # continue 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.068 08:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.068 08:03:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.068 08:03:51 -- setup/common.sh@33 -- # echo 0 00:03:18.068 08:03:51 -- setup/common.sh@33 -- # return 0 00:03:18.068 08:03:51 -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.068 08:03:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.068 08:03:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.068 08:03:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.068 08:03:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:18.068 node0=512 expecting 513 00:03:18.068 08:03:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.068 08:03:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.068 08:03:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.068 08:03:51 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:18.068 node1=513 expecting 512 00:03:18.068 08:03:51 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:18.068 00:03:18.068 real 0m3.285s 00:03:18.068 user 0m1.333s 00:03:18.068 sys 0m1.999s 00:03:18.068 08:03:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.068 08:03:51 -- common/autotest_common.sh@10 -- # set +x 00:03:18.069 ************************************ 00:03:18.069 END TEST odd_alloc 00:03:18.069 ************************************ 00:03:18.069 08:03:51 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:18.069 08:03:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:18.069 08:03:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:18.069 08:03:51 -- common/autotest_common.sh@10 -- # set +x 00:03:18.069 ************************************ 00:03:18.069 START TEST custom_alloc 00:03:18.069 ************************************ 00:03:18.069 08:03:51 -- common/autotest_common.sh@1102 -- # custom_alloc 00:03:18.069 08:03:51 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:18.069 08:03:51 -- setup/hugepages.sh@169 -- # local node 00:03:18.069 08:03:51 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:18.069 08:03:51 -- 
setup/hugepages.sh@170 -- # local nodes_hp 00:03:18.069 08:03:51 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:18.069 08:03:51 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:18.069 08:03:51 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:18.069 08:03:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:18.069 08:03:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.069 08:03:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.069 08:03:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.069 08:03:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:18.069 08:03:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.069 08:03:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.069 08:03:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.069 08:03:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:18.069 08:03:51 -- setup/hugepages.sh@83 -- # : 256 00:03:18.069 08:03:51 -- setup/hugepages.sh@84 -- # : 1 00:03:18.069 08:03:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:18.069 08:03:51 -- setup/hugepages.sh@83 -- # : 0 00:03:18.069 08:03:51 -- setup/hugepages.sh@84 -- # : 0 00:03:18.069 08:03:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:18.069 08:03:51 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:18.069 08:03:51 -- 
setup/hugepages.sh@49 -- # local size=2097152 00:03:18.069 08:03:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.069 08:03:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.069 08:03:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.069 08:03:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.069 08:03:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.069 08:03:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.069 08:03:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.069 08:03:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.069 08:03:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.069 08:03:51 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:18.069 08:03:51 -- setup/hugepages.sh@78 -- # return 0 00:03:18.069 08:03:51 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:18.069 08:03:51 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:18.069 08:03:51 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:18.069 08:03:51 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:18.069 08:03:51 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:18.069 08:03:51 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:18.069 08:03:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.069 08:03:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.069 08:03:51 -- 
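At hugepages.sh@181-187 the trace collects one `nodes_hp[N]=COUNT` entry per node into an array and joins them with a comma IFS, producing the `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` string handed to setup.sh. A small sketch of that assembly pattern, assuming the same two-node values as the log:

```shell
#!/usr/bin/env bash
# Sketch of the HUGENODE construction visible at hugepages.sh@181-187.
# Each node's page count becomes one "nodes_hp[N]=COUNT" entry; a comma IFS
# in a subshell joins the array into the single spec string setup.sh consumes.
nodes_hp=(512 1024)                 # per-node counts from the trace
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
joined=$(IFS=,; echo "${HUGENODE[*]}")   # "${arr[*]}" joins on the first IFS char
echo "$joined"
```

The subshell keeps the IFS change local, which is why the trace declares `local IFS=,` at hugepages.sh@167 before doing the equivalent join in-function.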
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.069 08:03:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.069 08:03:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.069 08:03:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.069 08:03:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:18.069 08:03:51 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.069 08:03:51 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:18.069 08:03:51 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.069 08:03:51 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:18.069 08:03:51 -- setup/hugepages.sh@78 -- # return 0 00:03:18.069 08:03:51 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:18.069 08:03:51 -- setup/hugepages.sh@187 -- # setup output 00:03:18.069 08:03:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.069 08:03:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.604 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:21.175 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.175 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:80:04.6 (8086 2021): Already using the 
vfio-pci driver 00:03:21.175 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.175 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.175 08:03:54 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:21.175 08:03:54 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:21.175 08:03:54 -- setup/hugepages.sh@89 -- # local node 00:03:21.175 08:03:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.175 08:03:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.175 08:03:54 -- setup/hugepages.sh@92 -- # local surp 00:03:21.175 08:03:54 -- setup/hugepages.sh@93 -- # local resv 00:03:21.175 08:03:54 -- setup/hugepages.sh@94 -- # local anon 00:03:21.175 08:03:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.175 08:03:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.175 08:03:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.175 08:03:54 -- setup/common.sh@18 -- # local node= 00:03:21.175 08:03:54 -- setup/common.sh@19 -- # local var val 00:03:21.175 08:03:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.175 08:03:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.175 08:03:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.175 08:03:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.175 08:03:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.175 08:03:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
93323000 kB' 'MemFree: 72520884 kB' 'MemAvailable: 77495752 kB' 'Buffers: 2696 kB' 'Cached: 14214456 kB' 'SwapCached: 0 kB' 'Active: 10083652 kB' 'Inactive: 4658400 kB' 'Active(anon): 9517428 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528104 kB' 'Mapped: 207476 kB' 'Shmem: 8992528 kB' 'KReclaimable: 530080 kB' 'Slab: 1059312 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529232 kB' 'KernelStack: 19360 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477240 kB' 'Committed_AS: 10885136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212852 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 
08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.175 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.175 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Slab 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- 
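The long runs of `[[ KEY == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` above are setup/common.sh's get_meminfo scanning every /proc/meminfo line until the requested key matches; the backslash-escaped pattern forces a literal (non-glob) comparison. A minimal sketch of that scan, run here against a small inline sample rather than the live /proc/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop in setup/common.sh@17-33: split each line on
# ': ', skip non-matching keys with 'continue', echo the first match's value,
# and fall back to 0 if the key never appears (as the '@33 echo 0' records show).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # quoting disables glob interpretation
        echo "$val"
        return 0
    done
    echo 0
}

printf '%s\n' 'HugePages_Total: 1536' 'HugePages_Surp: 0' |
    get_meminfo HugePages_Surp
```

The real script additionally strips a leading `Node N ` prefix (common.sh@29) so the same loop works on per-node meminfo files under /sys/devices/system/node.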
setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.176 08:03:54 -- setup/common.sh@33 -- # echo 0 00:03:21.176 08:03:54 -- setup/common.sh@33 -- # return 0 00:03:21.176 08:03:54 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.176 08:03:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.176 08:03:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.176 08:03:54 -- setup/common.sh@18 -- # local node= 00:03:21.176 08:03:54 -- setup/common.sh@19 -- # local var val 00:03:21.176 08:03:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.176 08:03:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.176 08:03:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.176 08:03:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.176 08:03:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.176 08:03:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 72521724 kB' 'MemAvailable: 77496592 kB' 'Buffers: 2696 kB' 'Cached: 14214460 kB' 'SwapCached: 0 kB' 'Active: 10082952 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516728 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527408 kB' 'Mapped: 207416 kB' 'Shmem: 8992532 kB' 'KReclaimable: 530080 kB' 'Slab: 1059304 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529224 kB' 'KernelStack: 19360 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477240 kB' 'Committed_AS: 10885148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212852 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.176 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.176 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 
08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- 
setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.177 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.177 08:03:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- 
# continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 
00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.178 08:03:54 -- setup/common.sh@33 -- # echo 0 00:03:21.178 08:03:54 -- setup/common.sh@33 -- # return 0 00:03:21.178 08:03:54 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.178 08:03:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.178 08:03:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.178 08:03:54 -- setup/common.sh@18 -- # local node= 00:03:21.178 08:03:54 -- setup/common.sh@19 -- # local var val 00:03:21.178 08:03:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.178 08:03:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.178 08:03:54 -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.178 08:03:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.178 08:03:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.178 08:03:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 72520968 kB' 'MemAvailable: 77495836 kB' 'Buffers: 2696 kB' 'Cached: 14214472 kB' 'SwapCached: 0 kB' 'Active: 10083112 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516888 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527608 kB' 'Mapped: 207416 kB' 'Shmem: 8992544 kB' 'KReclaimable: 530080 kB' 'Slab: 1059304 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529224 kB' 'KernelStack: 19360 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477240 kB' 'Committed_AS: 10885164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212852 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.178 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.178 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 
-- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # 
continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 
08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.179 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.179 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.180 08:03:54 -- setup/common.sh@33 -- # echo 0 00:03:21.180 08:03:54 -- setup/common.sh@33 -- # 
return 0 00:03:21.180 08:03:54 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.180 08:03:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:21.180 nr_hugepages=1536 00:03:21.180 08:03:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.180 resv_hugepages=0 00:03:21.180 08:03:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.180 surplus_hugepages=0 00:03:21.180 08:03:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.180 anon_hugepages=0 00:03:21.180 08:03:54 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:21.180 08:03:54 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:21.180 08:03:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.180 08:03:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.180 08:03:54 -- setup/common.sh@18 -- # local node= 00:03:21.180 08:03:54 -- setup/common.sh@19 -- # local var val 00:03:21.180 08:03:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.180 08:03:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.180 08:03:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.180 08:03:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.180 08:03:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.180 08:03:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.180 08:03:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 72520528 kB' 'MemAvailable: 77495396 kB' 'Buffers: 2696 kB' 'Cached: 14214496 kB' 'SwapCached: 0 kB' 'Active: 10082784 kB' 'Inactive: 4658400 kB' 'Active(anon): 9516560 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527204 kB' 'Mapped: 207416 kB' 'Shmem: 8992568 kB' 'KReclaimable: 530080 kB' 'Slab: 1059304 kB' 
'SReclaimable: 530080 kB' 'SUnreclaim: 529224 kB' 'KernelStack: 19344 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477240 kB' 'Committed_AS: 10885176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212852 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 
08:03:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.180 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.180 08:03:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.181 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.181 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.441 08:03:54 -- setup/common.sh@33 -- # echo 1536 00:03:21.441 08:03:54 -- setup/common.sh@33 -- # return 0 00:03:21.441 08:03:54 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:21.441 08:03:54 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.441 08:03:54 -- setup/hugepages.sh@27 -- # local node 00:03:21.441 08:03:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.441 08:03:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.441 08:03:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.441 08:03:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.441 08:03:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.441 08:03:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.441 08:03:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.441 08:03:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.441 08:03:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.441 08:03:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.441 08:03:54 -- setup/common.sh@18 -- # local node=0 00:03:21.441 08:03:54 -- setup/common.sh@19 -- # local var val 00:03:21.441 08:03:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.441 08:03:54 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.441 08:03:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.441 08:03:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.441 08:03:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.441 08:03:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 25605652 kB' 'MemUsed: 7028976 kB' 'SwapCached: 0 kB' 'Active: 3230616 kB' 'Inactive: 1226712 kB' 'Active(anon): 2776892 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4114032 kB' 'Mapped: 156976 kB' 'AnonPages: 346432 kB' 'Shmem: 2433596 kB' 'KernelStack: 10968 kB' 'PageTables: 6108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 238916 kB' 'Slab: 513236 kB' 'SReclaimable: 238916 kB' 'SUnreclaim: 274320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.441 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.441 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 
08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- 
# continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 
08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@33 -- # echo 0 00:03:21.442 08:03:54 -- setup/common.sh@33 -- # return 0 00:03:21.442 08:03:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.442 08:03:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.442 08:03:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
00:03:21.442 08:03:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.442 08:03:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.442 08:03:54 -- setup/common.sh@18 -- # local node=1 00:03:21.442 08:03:54 -- setup/common.sh@19 -- # local var val 00:03:21.442 08:03:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.442 08:03:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.442 08:03:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.442 08:03:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.442 08:03:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.442 08:03:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688372 kB' 'MemFree: 46914868 kB' 'MemUsed: 13773504 kB' 'SwapCached: 0 kB' 'Active: 6852364 kB' 'Inactive: 3431688 kB' 'Active(anon): 6739864 kB' 'Inactive(anon): 0 kB' 'Active(file): 112500 kB' 'Inactive(file): 3431688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10103176 kB' 'Mapped: 50440 kB' 'AnonPages: 180948 kB' 'Shmem: 6558988 kB' 'KernelStack: 8376 kB' 'PageTables: 2772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 291164 kB' 'Slab: 546068 kB' 'SReclaimable: 291164 kB' 'SUnreclaim: 254904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.442 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.442 08:03:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # continue 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.443 08:03:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.443 08:03:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.443 08:03:54 -- setup/common.sh@33 -- # echo 0 00:03:21.443 08:03:54 -- setup/common.sh@33 -- # return 0 00:03:21.443 08:03:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.443 08:03:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.443 08:03:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.443 08:03:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.443 08:03:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.443 node0=512 expecting 512 00:03:21.443 08:03:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.443 08:03:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.443 08:03:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.443 08:03:54 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:21.443 node1=1024 expecting 1024 00:03:21.443 08:03:54 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:21.443 00:03:21.443 real 0m3.349s 00:03:21.443 user 0m1.360s 00:03:21.443 sys 0m2.038s 00:03:21.443 08:03:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:21.443 08:03:54 -- common/autotest_common.sh@10 -- # set +x 00:03:21.443 ************************************ 00:03:21.443 END TEST custom_alloc 00:03:21.443 ************************************ 00:03:21.443 08:03:54 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:21.443 08:03:54 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:21.443 08:03:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:21.443 08:03:54 -- common/autotest_common.sh@10 -- # set +x 00:03:21.443 ************************************ 00:03:21.443 START TEST no_shrink_alloc 00:03:21.443 ************************************ 00:03:21.443 08:03:54 -- common/autotest_common.sh@1102 -- # no_shrink_alloc 00:03:21.443 08:03:54 -- 
setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:21.443 08:03:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.443 08:03:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:21.443 08:03:54 -- setup/hugepages.sh@51 -- # shift 00:03:21.443 08:03:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:21.443 08:03:54 -- setup/hugepages.sh@52 -- # local node_ids 00:03:21.443 08:03:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.443 08:03:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.443 08:03:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:21.443 08:03:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:21.443 08:03:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.444 08:03:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.444 08:03:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.444 08:03:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.444 08:03:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.444 08:03:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:21.444 08:03:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.444 08:03:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:21.444 08:03:54 -- setup/hugepages.sh@73 -- # return 0 00:03:21.444 08:03:54 -- setup/hugepages.sh@198 -- # setup output 00:03:21.444 08:03:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.444 08:03:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.978 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:24.236 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.236 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.236 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.236 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.236 0000:00:04.4 (8086 
2021): Already using the vfio-pci driver 00:03:24.236 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.236 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.236 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.497 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.497 08:03:58 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:24.497 08:03:58 -- setup/hugepages.sh@89 -- # local node 00:03:24.497 08:03:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.497 08:03:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.497 08:03:58 -- setup/hugepages.sh@92 -- # local surp 00:03:24.497 08:03:58 -- setup/hugepages.sh@93 -- # local resv 00:03:24.497 08:03:58 -- setup/hugepages.sh@94 -- # local anon 00:03:24.497 08:03:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.497 08:03:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.497 08:03:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.497 08:03:58 -- setup/common.sh@18 -- # local node= 00:03:24.497 08:03:58 -- setup/common.sh@19 -- # local var val 00:03:24.497 08:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.497 08:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.497 08:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.497 
08:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.497 08:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.497 08:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.497 08:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73521208 kB' 'MemAvailable: 78496076 kB' 'Buffers: 2696 kB' 'Cached: 14214592 kB' 'SwapCached: 0 kB' 'Active: 10084684 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518460 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528524 kB' 'Mapped: 207472 kB' 'Shmem: 8992664 kB' 'KReclaimable: 530080 kB' 'Slab: 1059920 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529840 kB' 'KernelStack: 19328 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10885296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212868 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.497 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.497 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- 
setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 
00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 
-- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 
08:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.498 08:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.498 08:03:58 -- setup/common.sh@33 -- # echo 0 00:03:24.498 08:03:58 -- setup/common.sh@33 -- # return 0 00:03:24.498 08:03:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.498 08:03:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.498 08:03:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.498 08:03:58 -- setup/common.sh@18 -- # local node= 00:03:24.498 08:03:58 -- setup/common.sh@19 -- # local var val 00:03:24.498 08:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.498 08:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.498 08:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.498 08:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.498 08:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.498 08:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.498 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73521208 kB' 'MemAvailable: 78496076 kB' 'Buffers: 2696 kB' 'Cached: 14214596 kB' 'SwapCached: 0 kB' 'Active: 10084860 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518636 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528760 kB' 'Mapped: 207472 kB' 'Shmem: 8992668 kB' 'KReclaimable: 530080 kB' 'Slab: 1059920 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529840 kB' 'KernelStack: 19376 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10885676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212836 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 
00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 
08:03:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.499 08:03:58 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.499 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.499 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.500 08:03:58 -- setup/common.sh@33 -- # echo 0 00:03:24.500 08:03:58 -- setup/common.sh@33 -- # return 0 00:03:24.500 08:03:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.500 08:03:58 -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:03:24.500 08:03:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.500 08:03:58 -- setup/common.sh@18 -- # local node= 00:03:24.500 08:03:58 -- setup/common.sh@19 -- # local var val 00:03:24.500 08:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.500 08:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.500 08:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.500 08:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.500 08:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.500 08:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73521468 kB' 'MemAvailable: 78496336 kB' 'Buffers: 2696 kB' 'Cached: 14214608 kB' 'SwapCached: 0 kB' 'Active: 10084392 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518168 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528760 kB' 'Mapped: 207396 kB' 'Shmem: 8992680 kB' 'KReclaimable: 530080 kB' 'Slab: 1059932 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529852 kB' 'KernelStack: 19344 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10885692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212820 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 
-- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.500 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.500 08:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.501 08:03:58 -- setup/common.sh@33 -- # echo 0 00:03:24.501 08:03:58 -- setup/common.sh@33 -- # return 0 00:03:24.501 08:03:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.501 08:03:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.501 nr_hugepages=1024 00:03:24.501 08:03:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.501 resv_hugepages=0 00:03:24.501 08:03:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.501 surplus_hugepages=0 00:03:24.501 08:03:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.501 anon_hugepages=0 00:03:24.501 08:03:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.501 08:03:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.501 08:03:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.501 08:03:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.501 08:03:58 -- setup/common.sh@18 -- # local node= 00:03:24.501 08:03:58 -- setup/common.sh@19 -- # local var val 00:03:24.501 08:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.501 08:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.501 08:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.501 08:03:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.501 08:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.501 08:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.501 08:03:58 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 93323000 kB' 'MemFree: 73521852 kB' 'MemAvailable: 78496720 kB' 'Buffers: 2696 kB' 'Cached: 14214620 kB' 'SwapCached: 0 kB' 'Active: 10084360 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518136 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528732 kB' 'Mapped: 207396 kB' 'Shmem: 8992692 kB' 'KReclaimable: 530080 kB' 'Slab: 1059932 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529852 kB' 'KernelStack: 19360 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10885708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212836 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.501 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.501 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 
00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 
-- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.502 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.502 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 
-- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.503 08:03:58 -- setup/common.sh@33 -- # echo 1024 00:03:24.503 08:03:58 -- setup/common.sh@33 -- # return 0 00:03:24.503 08:03:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.503 08:03:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.503 08:03:58 -- setup/hugepages.sh@27 -- # local node 00:03:24.503 08:03:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.503 08:03:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.503 08:03:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.503 08:03:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:24.503 08:03:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.503 08:03:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.503 08:03:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.503 08:03:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:03:24.503 08:03:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.503 08:03:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.503 08:03:58 -- setup/common.sh@18 -- # local node=0 00:03:24.503 08:03:58 -- setup/common.sh@19 -- # local var val 00:03:24.503 08:03:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.503 08:03:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.503 08:03:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.503 08:03:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.503 08:03:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.503 08:03:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.503 08:03:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 24557928 kB' 'MemUsed: 8076700 kB' 'SwapCached: 0 kB' 'Active: 3231384 kB' 'Inactive: 1226712 kB' 'Active(anon): 2777660 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4114084 kB' 'Mapped: 156980 kB' 'AnonPages: 347232 kB' 'Shmem: 2433648 kB' 'KernelStack: 11000 kB' 'PageTables: 6232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 238916 kB' 'Slab: 513832 kB' 'SReclaimable: 238916 kB' 'SUnreclaim: 274916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.503 08:03:58 -- setup/common.sh@32 -- # continue 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.503 08:03:58 -- setup/common.sh@31 -- # read -r var val _
00:03:24.763 08:03:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.763 08:03:58 -- setup/common.sh@33 -- # echo 0 00:03:24.763 08:03:58 -- setup/common.sh@33 -- # return 0 00:03:24.763 08:03:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.763 08:03:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.763 08:03:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.763 08:03:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.763 08:03:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' node0=1024 expecting 1024 00:03:24.763 08:03:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.763 08:03:58 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:24.763 08:03:58 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:24.763 08:03:58 -- setup/hugepages.sh@202 -- # setup output 00:03:24.763 08:03:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.763 08:03:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.294 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:27.552 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.552 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:80:04.5 (8086 2021): Already using
the vfio-pci driver 00:03:27.552 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.552 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.552 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:27.814 08:04:01 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:27.814 08:04:01 -- setup/hugepages.sh@89 -- # local node 00:03:27.814 08:04:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.814 08:04:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.814 08:04:01 -- setup/hugepages.sh@92 -- # local surp 00:03:27.814 08:04:01 -- setup/hugepages.sh@93 -- # local resv 00:03:27.814 08:04:01 -- setup/hugepages.sh@94 -- # local anon 00:03:27.814 08:04:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.814 08:04:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.814 08:04:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.814 08:04:01 -- setup/common.sh@18 -- # local node= 00:03:27.814 08:04:01 -- setup/common.sh@19 -- # local var val 00:03:27.814 08:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.814 08:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.814 08:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.814 08:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.814 08:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.814 08:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.814 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.814 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.814 08:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73548752 kB' 'MemAvailable: 78523620 kB' 
'Buffers: 2696 kB' 'Cached: 14214684 kB' 'SwapCached: 0 kB' 'Active: 10084840 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518616 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529160 kB' 'Mapped: 207408 kB' 'Shmem: 8992756 kB' 'KReclaimable: 530080 kB' 'Slab: 1059796 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529716 kB' 'KernelStack: 19328 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10886268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212980 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:27.814 08:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.814 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.815 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.815 08:04:01 -- setup/common.sh@31 -- # read -r var val _
00:03:27.816 08:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.816 08:04:01 -- setup/common.sh@33 -- # echo 0 00:03:27.816 08:04:01 -- setup/common.sh@33 -- # return 0 00:03:27.816 08:04:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:27.816 08:04:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.816 08:04:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.816 08:04:01 -- setup/common.sh@18 -- # local node= 00:03:27.816 08:04:01 -- setup/common.sh@19 -- # local var val 00:03:27.816 08:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.816 08:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.816 08:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.816 08:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.816 08:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.816 08:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.816 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.816 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.816 08:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73555428 kB' 'MemAvailable: 78530296 kB' 'Buffers: 2696 kB' 'Cached: 14214688 kB' 'SwapCached: 0 kB' 'Active: 10084796 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518572 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB'
'AnonPages: 529064 kB' 'Mapped: 207408 kB' 'Shmem: 8992760 kB' 'KReclaimable: 530080 kB' 'Slab: 1059772 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529692 kB' 'KernelStack: 19248 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10886280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212916 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:27.816 08:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.816 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.816 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.816 08:04:01 -- setup/common.sh@31 -- # read -r var val _
00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 --
# IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.817 08:04:01 -- setup/common.sh@33 -- # echo 0 00:03:27.817 08:04:01 -- setup/common.sh@33 -- # return 0 00:03:27.817 08:04:01 -- setup/hugepages.sh@99 -- # surp=0 00:03:27.817 08:04:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.817 08:04:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.817 08:04:01 -- setup/common.sh@18 -- # local node= 00:03:27.817 08:04:01 -- setup/common.sh@19 -- # local var val 00:03:27.817 08:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.817 08:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.817 08:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.817 
08:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.817 08:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.817 08:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.817 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.817 08:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73555580 kB' 'MemAvailable: 78530448 kB' 'Buffers: 2696 kB' 'Cached: 14214700 kB' 'SwapCached: 0 kB' 'Active: 10084428 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518204 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528632 kB' 'Mapped: 207408 kB' 'Shmem: 8992772 kB' 'KReclaimable: 530080 kB' 'Slab: 1059804 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529724 kB' 'KernelStack: 19344 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10886296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212916 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.817 08:04:01 -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31/@32 IFS/read/continue trace elided while the keys MemFree ... HugePages_Free fail the HugePages_Rsvd match ...] 00:03:27.818 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.818 08:04:01 -- setup/common.sh@33 -- # echo 0 00:03:27.818 08:04:01 -- setup/common.sh@33 -- # return 0 00:03:27.818 08:04:01 -- setup/hugepages.sh@100 
-- # resv=0 00:03:27.818 08:04:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.818 nr_hugepages=1024 00:03:27.818 08:04:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.818 resv_hugepages=0 00:03:27.818 08:04:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.818 surplus_hugepages=0 00:03:27.818 08:04:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.818 anon_hugepages=0 00:03:27.818 08:04:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.818 08:04:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.818 08:04:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.818 08:04:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.818 08:04:01 -- setup/common.sh@18 -- # local node= 00:03:27.818 08:04:01 -- setup/common.sh@19 -- # local var val 00:03:27.818 08:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.819 08:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.819 08:04:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.819 08:04:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.819 08:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.819 08:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.819 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.819 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.819 08:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93323000 kB' 'MemFree: 73554576 kB' 'MemAvailable: 78529444 kB' 'Buffers: 2696 kB' 'Cached: 14214712 kB' 'SwapCached: 0 kB' 'Active: 10084344 kB' 'Inactive: 4658400 kB' 'Active(anon): 9518120 kB' 'Inactive(anon): 0 kB' 'Active(file): 566224 kB' 'Inactive(file): 4658400 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528536 kB' 'Mapped: 207408 kB' 
'Shmem: 8992784 kB' 'KReclaimable: 530080 kB' 'Slab: 1059804 kB' 'SReclaimable: 530080 kB' 'SUnreclaim: 529724 kB' 'KernelStack: 19328 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001528 kB' 'Committed_AS: 10899292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212932 kB' 'VmallocChunk: 0 kB' 'Percpu: 85248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2829268 kB' 'DirectMap2M: 14675968 kB' 'DirectMap1G: 84934656 kB' 00:03:27.819 08:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.819 08:04:01 -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31/@32 IFS/read/continue trace elided while the keys MemFree ... Percpu fail the HugePages_Total match ...] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- 
setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- 
setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.820 08:04:01 -- setup/common.sh@33 -- # echo 1024 00:03:27.820 08:04:01 -- setup/common.sh@33 -- # return 0 00:03:27.820 08:04:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.820 08:04:01 -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.820 08:04:01 -- setup/hugepages.sh@27 -- # local node 00:03:27.820 08:04:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.820 08:04:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.820 08:04:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.820 08:04:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:27.820 08:04:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.820 08:04:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.820 08:04:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.820 08:04:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.820 08:04:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.820 08:04:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.820 08:04:01 -- setup/common.sh@18 -- # local node=0 00:03:27.820 08:04:01 -- setup/common.sh@19 -- # local var val 00:03:27.820 08:04:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.820 08:04:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.820 08:04:01 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.820 08:04:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.820 08:04:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.820 08:04:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 24557908 kB' 'MemUsed: 8076720 kB' 'SwapCached: 0 kB' 'Active: 3231776 kB' 'Inactive: 1226712 kB' 'Active(anon): 2778052 kB' 'Inactive(anon): 0 kB' 'Active(file): 453724 kB' 'Inactive(file): 1226712 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4114160 kB' 'Mapped: 156992 kB' 'AnonPages: 347488 kB' 'Shmem: 2433724 kB' 'KernelStack: 11000 kB' 'PageTables: 6260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 238916 kB' 'Slab: 513452 kB' 'SReclaimable: 238916 kB' 'SUnreclaim: 274536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.820 08:04:01 -- 
setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.820 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.820 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # 
continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 
-- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.821 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.821 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.822 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.822 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.822 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.822 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.822 08:04:01 -- setup/common.sh@32 -- # continue 00:03:27.822 08:04:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.822 08:04:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.822 08:04:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.822 08:04:01 -- setup/common.sh@33 -- # echo 0 00:03:27.822 08:04:01 -- setup/common.sh@33 -- # return 0 00:03:27.822 08:04:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.822 08:04:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.822 08:04:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.822 08:04:01 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:27.822 08:04:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:27.822 node0=1024 expecting 1024 00:03:27.822 08:04:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:27.822 00:03:27.822 real 0m6.444s 00:03:27.822 user 0m2.536s 00:03:27.822 sys 0m3.990s 00:03:27.822 08:04:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.822 08:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:27.822 ************************************ 00:03:27.822 END TEST no_shrink_alloc 00:03:27.822 ************************************ 00:03:27.822 08:04:01 -- setup/hugepages.sh@217 -- # clear_hp 00:03:27.822 08:04:01 -- setup/hugepages.sh@37 -- # local node hp 00:03:27.822 08:04:01 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:27.822 08:04:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.822 08:04:01 -- setup/hugepages.sh@41 -- # echo 0 00:03:27.822 08:04:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.822 08:04:01 -- setup/hugepages.sh@41 -- # echo 0 00:03:27.822 08:04:01 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:27.822 08:04:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.822 08:04:01 -- setup/hugepages.sh@41 -- # echo 0 00:03:27.822 08:04:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.822 08:04:01 -- setup/hugepages.sh@41 -- # echo 0 00:03:27.822 08:04:01 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:27.822 08:04:01 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:27.822 00:03:27.822 real 0m24.805s 00:03:27.822 user 0m9.640s 00:03:27.822 sys 0m14.704s 00:03:27.822 08:04:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.822 08:04:01 -- common/autotest_common.sh@10 -- # set +x 
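The `get_meminfo` loop traced above reads `/proc/meminfo` (or a per-node `meminfo` under `/sys/devices/system/node/`) one `key: value` line at a time with `IFS=': '`, and echoes the value once the requested key matches. A minimal stand-alone sketch of that pattern — a hypothetical helper, not the `setup/common.sh` implementation; the file path is a parameter so it can be pointed at a node-local meminfo:

```shell
#!/usr/bin/env bash
# Return the value column for one key from a meminfo-style file.
# Mirrors the IFS=': ' / read -r var val _ loop in the trace above.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. the HugePages_Total count
            return 0
        fi
    done < "$mem_f"
    return 1              # key not present in the file
}
```

In the trace, the same loop runs against `/sys/devices/system/node/node0/meminfo` after stripping the `Node 0` prefix, which is how node-local counters such as `HugePages_Surp` are read per node.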
00:03:27.822 ************************************ 00:03:27.822 END TEST hugepages 00:03:27.822 ************************************ 00:03:27.822 08:04:01 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:27.822 08:04:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:27.822 08:04:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:27.822 08:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:27.822 ************************************ 00:03:27.822 START TEST driver 00:03:27.822 ************************************ 00:03:27.822 08:04:01 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:28.080 * Looking for test storage... 00:03:28.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:28.080 08:04:01 -- setup/driver.sh@68 -- # setup reset 00:03:28.080 08:04:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.080 08:04:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.373 08:04:06 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:33.373 08:04:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:33.373 08:04:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:33.373 08:04:06 -- common/autotest_common.sh@10 -- # set +x 00:03:33.373 ************************************ 00:03:33.373 START TEST guess_driver 00:03:33.373 ************************************ 00:03:33.374 08:04:06 -- common/autotest_common.sh@1102 -- # guess_driver 00:03:33.374 08:04:06 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:33.374 08:04:06 -- setup/driver.sh@47 -- # local fail=0 00:03:33.374 08:04:06 -- setup/driver.sh@49 -- # pick_driver 00:03:33.374 08:04:06 -- setup/driver.sh@36 -- # vfio 00:03:33.374 08:04:06 -- setup/driver.sh@21 -- # local iommu_grups 
00:03:33.374 08:04:06 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:33.374 08:04:06 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:33.374 08:04:06 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:33.374 08:04:06 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:33.374 08:04:06 -- setup/driver.sh@29 -- # (( 220 > 0 )) 00:03:33.374 08:04:06 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:33.374 08:04:06 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:33.374 08:04:06 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:33.374 08:04:06 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:33.374 08:04:06 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:33.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:33.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:33.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:33.374 08:04:06 -- setup/driver.sh@30 -- # return 0 00:03:33.374 08:04:06 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:33.374 08:04:06 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:33.374 08:04:06 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:33.374 08:04:06 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:33.374 Looking for driver=vfio-pci 00:03:33.374 08:04:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.374 08:04:06 -- 
setup/driver.sh@45 -- # setup output config 00:03:33.374 08:04:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.374 08:04:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:35.279 08:04:08 -- setup/driver.sh@58 -- # [[ denied == \-\> ]] 00:03:35.279 08:04:08 -- setup/driver.sh@58 -- # continue 00:03:35.279 08:04:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.279 08:04:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.279 08:04:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.279 08:04:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:08 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 
-- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.539 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.539 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
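The `guess_driver` trace above settles on `vfio-pci` because IOMMU groups exist (`(( 220 > 0 ))`) and `modprobe --show-depends vfio_pci` resolves to real `.ko` module paths. A condensed sketch of that decision, with the group count and the modprobe output passed in as parameters so it needs neither root nor a live modules tree — the helper name and the `uio_pci_generic` fallback are assumptions for illustration, not the exact `driver.sh` control flow:

```shell
# Pick vfio-pci when the platform can support it: at least one IOMMU
# group, and the module dependency listing contains an actual .ko path
# (the *\.\k\o* test seen in the trace).
pick_driver() {
    local iommu_groups=$1 modprobe_out=$2
    if (( iommu_groups > 0 )) && [[ $modprobe_out == *.ko* ]]; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback for this sketch
    fi
}
```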
00:03:35.539 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.477 08:04:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:36.477 08:04:09 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:36.477 08:04:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.477 08:04:10 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:36.477 08:04:10 -- setup/driver.sh@65 -- # setup reset 00:03:36.477 08:04:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.477 08:04:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.752 00:03:41.752 real 0m8.412s 00:03:41.752 user 0m2.484s 00:03:41.752 sys 0m4.338s 00:03:41.752 08:04:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.752 08:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:41.752 ************************************ 00:03:41.752 END TEST guess_driver 00:03:41.752 ************************************ 00:03:41.752 00:03:41.752 real 0m13.034s 00:03:41.752 user 0m3.897s 00:03:41.752 sys 0m6.811s 00:03:41.752 08:04:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.752 08:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:41.752 ************************************ 00:03:41.752 END TEST driver 00:03:41.752 ************************************ 00:03:41.752 08:04:14 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:41.752 08:04:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:41.752 08:04:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:41.752 08:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:41.752 ************************************ 00:03:41.752 START TEST devices 00:03:41.752 ************************************ 00:03:41.752 08:04:14 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 
00:03:41.752 * Looking for test storage... 00:03:41.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:41.752 08:04:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:41.752 08:04:14 -- setup/devices.sh@192 -- # setup reset 00:03:41.752 08:04:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.752 08:04:14 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.040 08:04:18 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:45.040 08:04:18 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:45.040 08:04:18 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:45.040 08:04:18 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:45.040 08:04:18 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:45.040 08:04:18 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:03:45.040 08:04:18 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:03:45.040 08:04:18 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.040 08:04:18 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:45.040 08:04:18 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:45.040 08:04:18 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n1 00:03:45.040 08:04:18 -- common/autotest_common.sh@1645 -- # local device=nvme1n1 00:03:45.040 08:04:18 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:45.040 08:04:18 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:03:45.040 08:04:18 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:03:45.040 08:04:18 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme1n2 00:03:45.040 08:04:18 -- common/autotest_common.sh@1645 -- # local device=nvme1n2 00:03:45.040 08:04:18 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 
00:03:45.040 08:04:18 -- common/autotest_common.sh@1648 -- # [[ host-managed != none ]] 00:03:45.040 08:04:18 -- common/autotest_common.sh@1657 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:45.040 08:04:18 -- setup/devices.sh@196 -- # blocks=() 00:03:45.040 08:04:18 -- setup/devices.sh@196 -- # declare -a blocks 00:03:45.040 08:04:18 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:45.040 08:04:18 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:45.040 08:04:18 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:45.040 08:04:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:45.040 08:04:18 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:45.040 08:04:18 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:45.040 08:04:18 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:45.040 08:04:18 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:45.040 08:04:18 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:45.040 08:04:18 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:45.040 08:04:18 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:45.040 No valid GPT data, bailing 00:03:45.040 08:04:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:45.040 08:04:18 -- scripts/common.sh@393 -- # pt= 00:03:45.040 08:04:18 -- scripts/common.sh@394 -- # return 1 00:03:45.040 08:04:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:45.040 08:04:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:45.040 08:04:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:45.040 08:04:18 -- setup/common.sh@80 -- # echo 1000204886016 00:03:45.040 08:04:18 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:45.040 08:04:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:45.040 08:04:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 
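After the GPT probe bails out, the trace gates nvme0n1 on size: `sec_size_to_bytes` echoes 1000204886016 bytes (the sector count from `/sys/block/<dev>/size` times 512), which clears the `min_disk_size` floor. The arithmetic, as a sketch:

```shell
# The size gate applied to nvme0n1 above: a candidate disk must be at
# least min_disk_size bytes large to enter the test pool.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
disk_bytes=1000204886016                    # bytes echoed for nvme0n1
if (( disk_bytes >= min_disk_size )); then
    echo "nvme0n1 qualifies"
fi
```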
00:03:45.040 08:04:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:45.040 08:04:18 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:45.040 08:04:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:45.040 08:04:18 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:45.040 08:04:18 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:45.040 08:04:18 -- setup/devices.sh@203 -- # continue 00:03:45.040 08:04:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:45.040 08:04:18 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:45.040 08:04:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:45.040 08:04:18 -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:45.040 08:04:18 -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:45.040 08:04:18 -- setup/devices.sh@203 -- # continue 00:03:45.040 08:04:18 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:45.040 08:04:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:45.040 08:04:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:45.040 08:04:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:45.040 08:04:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:45.040 08:04:18 -- common/autotest_common.sh@10 -- # set +x 00:03:45.040 ************************************ 00:03:45.040 START TEST nvme_mount 00:03:45.040 ************************************ 00:03:45.040 08:04:18 -- common/autotest_common.sh@1102 -- # nvme_mount 00:03:45.040 08:04:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:45.040 08:04:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:45.040 08:04:18 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.040 08:04:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.040 08:04:18 -- 
setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:45.040 08:04:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:45.040 08:04:18 -- setup/common.sh@40 -- # local part_no=1 00:03:45.040 08:04:18 -- setup/common.sh@41 -- # local size=1073741824 00:03:45.040 08:04:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:45.040 08:04:18 -- setup/common.sh@44 -- # parts=() 00:03:45.040 08:04:18 -- setup/common.sh@44 -- # local parts 00:03:45.041 08:04:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:45.041 08:04:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.041 08:04:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.041 08:04:18 -- setup/common.sh@46 -- # (( part++ )) 00:03:45.041 08:04:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.041 08:04:18 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:45.041 08:04:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:45.041 08:04:18 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:45.978 Creating new GPT entries in memory. 00:03:45.978 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.978 other utilities. 00:03:45.978 08:04:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.978 08:04:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.978 08:04:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.978 08:04:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.978 08:04:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:46.914 Creating new GPT entries in memory. 00:03:46.914 The operation has completed successfully. 
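The sgdisk boundaries in the trace follow directly from the `setup/common.sh` arithmetic shown above: the requested size is converted to 512-byte sectors, the first partition starts at sector 2048, and each subsequent one starts right after the previous end (the dm_mount test later in the log repeats this with two partitions). A self-contained sketch of that loop:

```shell
# Reproduces the --new=<part>:<start>:<end> arguments seen in the log.
size=$((1073741824 / 512))    # 1 GiB in 512-byte sectors = 2097152
part_start=0 part_end=0
for part in 1 2; do
    # First partition begins at sector 2048; later ones are packed
    # back-to-back after the previous end.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end"
done
```

This prints `--new=1:2048:2099199` and `--new=2:2099200:4196351`, matching the boundaries flock'd through sgdisk in the trace.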
00:03:46.914 08:04:20 -- setup/common.sh@57 -- # (( part++ )) 00:03:46.914 08:04:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.914 08:04:20 -- setup/common.sh@62 -- # wait 2054193 00:03:46.914 08:04:20 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.914 08:04:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:46.914 08:04:20 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.914 08:04:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:46.914 08:04:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:46.914 08:04:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.914 08:04:20 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.914 08:04:20 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:46.914 08:04:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:46.914 08:04:20 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.914 08:04:20 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.914 08:04:20 -- setup/devices.sh@53 -- # local found=0 00:03:46.914 08:04:20 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.914 08:04:20 -- setup/devices.sh@56 -- # : 00:03:46.914 08:04:20 -- setup/devices.sh@59 -- # local pci status 00:03:46.914 08:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:46.914 08:04:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:46.914 08:04:20 -- setup/devices.sh@47 -- # setup output config 00:03:46.914 08:04:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.914 08:04:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:50.200 08:04:23 -- setup/devices.sh@63 -- # found=1 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.200 08:04:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.200 08:04:23 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:50.200 08:04:23 -- setup/devices.sh@71 -- # mountpoint -q 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.200 08:04:23 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.200 08:04:23 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.200 08:04:23 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:50.200 08:04:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.200 08:04:23 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.200 08:04:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:50.200 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.200 08:04:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.200 08:04:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.458 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:50.459 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:50.459 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.459 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.459 08:04:23 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:50.459 08:04:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:50.459 08:04:23 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.459 08:04:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:50.459 08:04:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF 
/dev/nvme0n1 1024M 00:03:50.459 08:04:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.459 08:04:24 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.459 08:04:24 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:50.459 08:04:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:50.459 08:04:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.459 08:04:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.459 08:04:24 -- setup/devices.sh@53 -- # local found=0 00:03:50.459 08:04:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:50.459 08:04:24 -- setup/devices.sh@56 -- # : 00:03:50.459 08:04:24 -- setup/devices.sh@59 -- # local pci status 00:03:50.459 08:04:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.459 08:04:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:50.459 08:04:24 -- setup/devices.sh@47 -- # setup output config 00:03:50.459 08:04:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.459 08:04:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.990 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.990 08:04:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:52.990 08:04:26 -- setup/devices.sh@63 -- # found=1 00:03:52.990 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 
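The `verify` helper driven above reduces to two checks per target: the mount point must actually be mounted, and the dummy test file (when one was written) must still exist. A condensed sketch, with illustrative paths rather than the exact workspace paths:

```shell
# Passes only when mount_point is a real mount and the optional
# dummy file survived; mirrors the mountpoint -q / -e checks above.
verify_mount() {
    local mount_point=$1 test_file=$2
    mountpoint -q "$mount_point" || return 1
    [[ -z $test_file || -e $test_file ]]
}
```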
00:03:52.990 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.990 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.248 08:04:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:53.248 08:04:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.507 08:04:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.507 08:04:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:53.507 08:04:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.508 08:04:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.508 08:04:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:53.508 08:04:27 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:53.508 08:04:27 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:53.508 08:04:27 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:53.508 08:04:27 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:53.508 08:04:27 -- setup/devices.sh@50 -- # local mount_point= 00:03:53.508 08:04:27 -- setup/devices.sh@51 -- # 
local test_file= 00:03:53.508 08:04:27 -- setup/devices.sh@53 -- # local found=0 00:03:53.508 08:04:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.508 08:04:27 -- setup/devices.sh@59 -- # local pci status 00:03:53.508 08:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.508 08:04:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:53.508 08:04:27 -- setup/devices.sh@47 -- # setup output config 00:03:53.508 08:04:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.508 08:04:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.834 08:04:29 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:29 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:56.834 08:04:29 -- setup/devices.sh@63 -- # found=1 00:03:56.834 08:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:29 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.834 08:04:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.834 
08:04:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.834 08:04:30 -- setup/devices.sh@68 -- # return 0 00:03:56.834 08:04:30 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:56.834 08:04:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.834 08:04:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.834 08:04:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.834 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.834 00:03:56.834 real 0m12.078s 00:03:56.834 user 0m3.673s 00:03:56.834 sys 0m6.201s 00:03:56.834 08:04:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.834 08:04:30 -- common/autotest_common.sh@10 -- # set +x 00:03:56.834 ************************************ 00:03:56.834 END TEST nvme_mount 00:03:56.834 ************************************ 00:03:56.834 08:04:30 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:56.834 08:04:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:56.834 08:04:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:56.834 08:04:30 -- common/autotest_common.sh@10 -- # set +x 00:03:56.834 ************************************ 00:03:56.834 START TEST dm_mount 00:03:56.834 ************************************ 00:03:56.834 08:04:30 -- common/autotest_common.sh@1102 -- # dm_mount 00:03:56.834 08:04:30 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:56.834 08:04:30 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:56.834 08:04:30 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:56.834 08:04:30 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:56.834 08:04:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.834 08:04:30 -- setup/common.sh@40 -- # local part_no=2 00:03:56.834 08:04:30 -- setup/common.sh@41 -- # local size=1073741824 00:03:56.834 08:04:30 -- setup/common.sh@43 -- 
# local part part_start=0 part_end=0 00:03:56.834 08:04:30 -- setup/common.sh@44 -- # parts=() 00:03:56.834 08:04:30 -- setup/common.sh@44 -- # local parts 00:03:56.834 08:04:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.834 08:04:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.834 08:04:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.834 08:04:30 -- setup/common.sh@46 -- # (( part++ )) 00:03:56.834 08:04:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.834 08:04:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.834 08:04:30 -- setup/common.sh@46 -- # (( part++ )) 00:03:56.834 08:04:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.834 08:04:30 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.834 08:04:30 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:56.834 08:04:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:57.770 Creating new GPT entries in memory. 00:03:57.770 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.770 other utilities. 00:03:57.770 08:04:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.770 08:04:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.770 08:04:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.770 08:04:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.770 08:04:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:59.145 Creating new GPT entries in memory. 00:03:59.145 The operation has completed successfully. 00:03:59.145 08:04:32 -- setup/common.sh@57 -- # (( part++ )) 00:03:59.145 08:04:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.145 08:04:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:59.145 08:04:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.145 08:04:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:00.080 The operation has completed successfully. 00:04:00.080 08:04:33 -- setup/common.sh@57 -- # (( part++ )) 00:04:00.080 08:04:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.080 08:04:33 -- setup/common.sh@62 -- # wait 2058951 00:04:00.080 08:04:33 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:00.080 08:04:33 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.080 08:04:33 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.080 08:04:33 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:00.080 08:04:33 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:00.080 08:04:33 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.080 08:04:33 -- setup/devices.sh@161 -- # break 00:04:00.080 08:04:33 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.080 08:04:33 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:00.080 08:04:33 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:00.080 08:04:33 -- setup/devices.sh@166 -- # dm=dm-0 00:04:00.080 08:04:33 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:00.080 08:04:33 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:00.080 08:04:33 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.080 08:04:33 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:00.080 08:04:33 -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.080 08:04:33 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:00.080 08:04:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:00.080 08:04:33 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.080 08:04:33 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.080 08:04:33 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:00.080 08:04:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:00.080 08:04:33 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:00.080 08:04:33 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:00.080 08:04:33 -- setup/devices.sh@53 -- # local found=0 00:04:00.080 08:04:33 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:00.080 08:04:33 -- setup/devices.sh@56 -- # : 00:04:00.080 08:04:33 -- setup/devices.sh@59 -- # local pci status 00:04:00.080 08:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.080 08:04:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:00.080 08:04:33 -- setup/devices.sh@47 -- # setup output config 00:04:00.080 08:04:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.080 08:04:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ Active devices: 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:03.360 08:04:36 -- setup/devices.sh@63 -- # found=1 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.360 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.360 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.361 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.361 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.361 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.361 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.361 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.361 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.361 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.361 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.361 08:04:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.361 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.361 08:04:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.361 08:04:36 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:03.361 08:04:36 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.361 08:04:36 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.361 08:04:36 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.361 08:04:36 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.361 08:04:36 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:03.361 08:04:36 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:03.361 08:04:36 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:03.361 08:04:36 -- setup/devices.sh@50 -- # local mount_point= 00:04:03.361 08:04:36 -- setup/devices.sh@51 -- # local test_file= 00:04:03.361 08:04:36 -- setup/devices.sh@53 -- # local found=0 00:04:03.361 08:04:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.361 08:04:36 -- setup/devices.sh@59 -- # local pci status 00:04:03.361 08:04:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.361 08:04:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:03.361 08:04:36 -- setup/devices.sh@47 -- # setup output config 00:04:03.361 08:04:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.361 08:04:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:06.647 08:04:39 -- setup/devices.sh@63 -- # found=1 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.647 08:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.647 08:04:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.647 08:04:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:06.647 08:04:40 -- setup/devices.sh@68 -- # return 0 00:04:06.647 08:04:40 -- setup/devices.sh@187 -- # cleanup_dm 00:04:06.647 08:04:40 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.647 08:04:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.647 08:04:40 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:06.647 08:04:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.647 08:04:40 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:06.647 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:06.647 08:04:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.647 08:04:40 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:06.647 00:04:06.647 real 0m9.647s 00:04:06.647 user 0m2.514s 00:04:06.647 sys 0m4.167s 00:04:06.647 08:04:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.647 08:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:06.647 ************************************ 00:04:06.647 END TEST dm_mount 00:04:06.647 ************************************ 00:04:06.647 08:04:40 -- setup/devices.sh@1 -- # cleanup 00:04:06.647 08:04:40 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:06.647 08:04:40 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.647 08:04:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:04:06.647 08:04:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.647 08:04:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.647 08:04:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.905 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:06.905 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:06.905 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:06.905 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:06.905 08:04:40 -- setup/devices.sh@12 -- # cleanup_dm 00:04:06.905 08:04:40 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.905 08:04:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.905 08:04:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.905 08:04:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.905 08:04:40 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.905 08:04:40 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:06.905 00:04:06.905 real 0m25.828s 00:04:06.905 user 0m7.641s 00:04:06.905 sys 0m12.865s 00:04:06.905 08:04:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.905 08:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:06.905 ************************************ 00:04:06.905 END TEST devices 00:04:06.905 ************************************ 00:04:06.905 00:04:06.905 real 1m25.787s 00:04:06.905 user 0m28.802s 00:04:06.905 sys 0m47.497s 00:04:06.905 08:04:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.905 08:04:40 -- common/autotest_common.sh@10 -- # set +x 00:04:06.905 ************************************ 00:04:06.905 END TEST setup.sh 00:04:06.906 ************************************ 00:04:06.906 08:04:40 -- spdk/autotest.sh@139 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:10.187 Hugepages 00:04:10.187 node hugesize free / total 00:04:10.187 node0 1048576kB 0 / 0 00:04:10.187 node0 2048kB 2048 / 2048 00:04:10.187 node1 1048576kB 0 / 0 00:04:10.187 node1 2048kB 0 / 0 00:04:10.187 00:04:10.187 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.187 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:10.187 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:10.187 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:10.187 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:10.187 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:10.187 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:10.187 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:10.187 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:10.187 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:10.187 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:04:10.187 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:10.187 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:10.187 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:10.187 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:10.187 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:10.187 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:10.187 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:10.188 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:10.188 08:04:43 -- spdk/autotest.sh@141 -- # uname -s 00:04:10.188 08:04:43 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:10.188 08:04:43 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:10.188 08:04:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.717 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:12.976 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.976 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.976 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.976 
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.976 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.976 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.235 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.171 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.171 08:04:47 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:15.108 08:04:48 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:15.108 08:04:48 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:15.108 08:04:48 -- common/autotest_common.sh@1517 -- # bdfs=($(get_nvme_bdfs)) 00:04:15.108 08:04:48 -- common/autotest_common.sh@1517 -- # get_nvme_bdfs 00:04:15.108 08:04:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:15.108 08:04:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:15.108 08:04:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.108 08:04:48 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:15.108 08:04:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:15.108 08:04:48 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:15.108 08:04:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:15.108 08:04:48 -- common/autotest_common.sh@1519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.421 
0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:18.421 Waiting for block devices as requested 00:04:18.421 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:18.421 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:18.421 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:18.421 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:18.683 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:18.683 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:18.683 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:18.683 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:18.943 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:18.944 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:18.944 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:18.944 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.202 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.202 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.202 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.461 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.461 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.461 08:04:53 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:04:19.461 08:04:53 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:19.461 08:04:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:19.461 08:04:53 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:19.461 08:04:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.461 08:04:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:19.461 08:04:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.461 08:04:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 
00:04:19.461 08:04:53 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme0 00:04:19.461 08:04:53 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme0 ]] 00:04:19.461 08:04:53 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme0 00:04:19.461 08:04:53 -- common/autotest_common.sh@1528 -- # grep oacs 00:04:19.461 08:04:53 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:04:19.461 08:04:53 -- common/autotest_common.sh@1528 -- # oacs=' 0xf' 00:04:19.461 08:04:53 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:04:19.461 08:04:53 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:04:19.461 08:04:53 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme0 00:04:19.461 08:04:53 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:04:19.461 08:04:53 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:04:19.461 08:04:53 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:04:19.461 08:04:53 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:04:19.461 08:04:53 -- common/autotest_common.sh@1540 -- # continue 00:04:19.461 08:04:53 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:19.461 08:04:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:19.461 08:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:19.461 08:04:53 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:19.461 08:04:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:19.461 08:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:19.719 08:04:53 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.248 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:22.505 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 
00:04:22.505 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.505 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.440 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.440 08:04:57 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:23.440 08:04:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:23.440 08:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:23.440 08:04:57 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:23.440 08:04:57 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:23.699 08:04:57 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:23.699 08:04:57 -- common/autotest_common.sh@1560 -- # bdfs=() 00:04:23.699 08:04:57 -- common/autotest_common.sh@1560 -- # local bdfs 00:04:23.699 08:04:57 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:23.699 08:04:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:23.699 08:04:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:23.699 08:04:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.699 08:04:57 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:23.699 08:04:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:23.699 08:04:57 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:23.699 08:04:57 -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:23.699 08:04:57 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:04:23.699 08:04:57 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:23.699 08:04:57 -- common/autotest_common.sh@1563 -- # device=0x0a54 00:04:23.699 08:04:57 -- common/autotest_common.sh@1564 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:23.699 08:04:57 -- common/autotest_common.sh@1565 -- # bdfs+=($bdf) 00:04:23.699 08:04:57 -- common/autotest_common.sh@1569 -- # printf '%s\n' 0000:5e:00.0 00:04:23.699 08:04:57 -- common/autotest_common.sh@1575 -- # [[ -z 0000:5e:00.0 ]] 00:04:23.699 08:04:57 -- common/autotest_common.sh@1580 -- # spdk_tgt_pid=2069108 00:04:23.699 08:04:57 -- common/autotest_common.sh@1579 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.699 08:04:57 -- common/autotest_common.sh@1581 -- # waitforlisten 2069108 00:04:23.699 08:04:57 -- common/autotest_common.sh@817 -- # '[' -z 2069108 ']' 00:04:23.699 08:04:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.699 08:04:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:23.699 08:04:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.699 08:04:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:23.699 08:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:23.699 [2024-02-13 08:04:57.260985] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:04:23.699 [2024-02-13 08:04:57.261031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069108 ] 00:04:23.699 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.699 [2024-02-13 08:04:57.318843] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.957 [2024-02-13 08:04:57.395936] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:23.957 [2024-02-13 08:04:57.396041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.523 08:04:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:24.523 08:04:58 -- common/autotest_common.sh@850 -- # return 0 00:04:24.523 08:04:58 -- common/autotest_common.sh@1583 -- # bdf_id=0 00:04:24.523 08:04:58 -- common/autotest_common.sh@1584 -- # for bdf in "${bdfs[@]}" 00:04:24.523 08:04:58 -- common/autotest_common.sh@1585 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:27.805 nvme0n1 00:04:27.805 08:05:01 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:27.805 [2024-02-13 08:05:01.179114] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:27.805 [2024-02-13 08:05:01.179146] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:27.805 request: 00:04:27.805 { 00:04:27.805 "nvme_ctrlr_name": "nvme0", 00:04:27.805 "password": "test", 00:04:27.805 "method": "bdev_nvme_opal_revert", 00:04:27.805 "req_id": 1 00:04:27.805 } 00:04:27.805 Got JSON-RPC error response 00:04:27.805 response: 00:04:27.805 { 00:04:27.805 "code": -32603, 00:04:27.805 "message": "Internal error" 00:04:27.805 } 
00:04:27.805 08:05:01 -- common/autotest_common.sh@1587 -- # true 00:04:27.805 08:05:01 -- common/autotest_common.sh@1588 -- # (( ++bdf_id )) 00:04:27.805 08:05:01 -- common/autotest_common.sh@1591 -- # killprocess 2069108 00:04:27.805 08:05:01 -- common/autotest_common.sh@924 -- # '[' -z 2069108 ']' 00:04:27.805 08:05:01 -- common/autotest_common.sh@928 -- # kill -0 2069108 00:04:27.805 08:05:01 -- common/autotest_common.sh@929 -- # uname 00:04:27.805 08:05:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:27.805 08:05:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2069108 00:04:27.805 08:05:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:27.805 08:05:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:27.805 08:05:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2069108' 00:04:27.805 killing process with pid 2069108 00:04:27.805 08:05:01 -- common/autotest_common.sh@943 -- # kill 2069108 00:04:27.805 08:05:01 -- common/autotest_common.sh@948 -- # wait 2069108 00:04:29.705 08:05:02 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:29.705 08:05:02 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:29.705 08:05:02 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:29.705 08:05:02 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:29.705 08:05:02 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:29.705 08:05:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:29.705 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.705 08:05:02 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:29.705 08:05:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.705 08:05:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.705 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.705 ************************************ 00:04:29.705 START TEST env 00:04:29.705 
************************************ 00:04:29.705 08:05:02 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:29.705 * Looking for test storage... 00:04:29.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:29.705 08:05:02 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:29.705 08:05:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.705 08:05:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.705 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.705 ************************************ 00:04:29.705 START TEST env_memory 00:04:29.705 ************************************ 00:04:29.705 08:05:02 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:29.705 00:04:29.705 00:04:29.705 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.705 http://cunit.sourceforge.net/ 00:04:29.705 00:04:29.705 00:04:29.705 Suite: memory 00:04:29.705 Test: alloc and free memory map ...[2024-02-13 08:05:03.002416] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:29.705 passed 00:04:29.705 Test: mem map translation ...[2024-02-13 08:05:03.021243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:29.705 [2024-02-13 08:05:03.021257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:29.705 [2024-02-13 08:05:03.021309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: 
*ERROR*: invalid usermode virtual address 281474976710656 00:04:29.705 [2024-02-13 08:05:03.021315] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:29.705 passed 00:04:29.705 Test: mem map registration ...[2024-02-13 08:05:03.059235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:29.705 [2024-02-13 08:05:03.059249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:29.705 passed 00:04:29.705 Test: mem map adjacent registrations ...passed 00:04:29.705 00:04:29.705 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.705 suites 1 1 n/a 0 0 00:04:29.705 tests 4 4 4 0 0 00:04:29.706 asserts 152 152 152 0 n/a 00:04:29.706 00:04:29.706 Elapsed time = 0.138 seconds 00:04:29.706 00:04:29.706 real 0m0.149s 00:04:29.706 user 0m0.141s 00:04:29.706 sys 0m0.007s 00:04:29.706 08:05:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.706 08:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:29.706 ************************************ 00:04:29.706 END TEST env_memory 00:04:29.706 ************************************ 00:04:29.706 08:05:03 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:29.706 08:05:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.706 08:05:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.706 08:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:29.706 ************************************ 00:04:29.706 START TEST env_vtophys 00:04:29.706 ************************************ 00:04:29.706 08:05:03 -- common/autotest_common.sh@1102 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:29.706 EAL: lib.eal log level changed from notice to debug 00:04:29.706 EAL: Detected lcore 0 as core 0 on socket 0 00:04:29.706 EAL: Detected lcore 1 as core 1 on socket 0 00:04:29.706 EAL: Detected lcore 2 as core 2 on socket 0 00:04:29.706 EAL: Detected lcore 3 as core 3 on socket 0 00:04:29.706 EAL: Detected lcore 4 as core 4 on socket 0 00:04:29.706 EAL: Detected lcore 5 as core 5 on socket 0 00:04:29.706 EAL: Detected lcore 6 as core 6 on socket 0 00:04:29.706 EAL: Detected lcore 7 as core 8 on socket 0 00:04:29.706 EAL: Detected lcore 8 as core 9 on socket 0 00:04:29.706 EAL: Detected lcore 9 as core 10 on socket 0 00:04:29.706 EAL: Detected lcore 10 as core 11 on socket 0 00:04:29.706 EAL: Detected lcore 11 as core 12 on socket 0 00:04:29.706 EAL: Detected lcore 12 as core 13 on socket 0 00:04:29.706 EAL: Detected lcore 13 as core 16 on socket 0 00:04:29.706 EAL: Detected lcore 14 as core 17 on socket 0 00:04:29.706 EAL: Detected lcore 15 as core 18 on socket 0 00:04:29.706 EAL: Detected lcore 16 as core 19 on socket 0 00:04:29.706 EAL: Detected lcore 17 as core 20 on socket 0 00:04:29.706 EAL: Detected lcore 18 as core 21 on socket 0 00:04:29.706 EAL: Detected lcore 19 as core 25 on socket 0 00:04:29.706 EAL: Detected lcore 20 as core 26 on socket 0 00:04:29.706 EAL: Detected lcore 21 as core 27 on socket 0 00:04:29.706 EAL: Detected lcore 22 as core 28 on socket 0 00:04:29.706 EAL: Detected lcore 23 as core 29 on socket 0 00:04:29.706 EAL: Detected lcore 24 as core 0 on socket 1 00:04:29.706 EAL: Detected lcore 25 as core 1 on socket 1 00:04:29.706 EAL: Detected lcore 26 as core 2 on socket 1 00:04:29.706 EAL: Detected lcore 27 as core 3 on socket 1 00:04:29.706 EAL: Detected lcore 28 as core 4 on socket 1 00:04:29.706 EAL: Detected lcore 29 as core 5 on socket 1 00:04:29.706 EAL: Detected lcore 30 as core 6 on socket 1 00:04:29.706 EAL: Detected lcore 31 as core 8 on socket 
1 00:04:29.706 EAL: Detected lcore 32 as core 9 on socket 1 00:04:29.706 EAL: Detected lcore 33 as core 10 on socket 1 00:04:29.706 EAL: Detected lcore 34 as core 11 on socket 1 00:04:29.706 EAL: Detected lcore 35 as core 12 on socket 1 00:04:29.706 EAL: Detected lcore 36 as core 13 on socket 1 00:04:29.706 EAL: Detected lcore 37 as core 16 on socket 1 00:04:29.706 EAL: Detected lcore 38 as core 17 on socket 1 00:04:29.706 EAL: Detected lcore 39 as core 18 on socket 1 00:04:29.706 EAL: Detected lcore 40 as core 19 on socket 1 00:04:29.706 EAL: Detected lcore 41 as core 20 on socket 1 00:04:29.706 EAL: Detected lcore 42 as core 21 on socket 1 00:04:29.706 EAL: Detected lcore 43 as core 25 on socket 1 00:04:29.706 EAL: Detected lcore 44 as core 26 on socket 1 00:04:29.706 EAL: Detected lcore 45 as core 27 on socket 1 00:04:29.706 EAL: Detected lcore 46 as core 28 on socket 1 00:04:29.706 EAL: Detected lcore 47 as core 29 on socket 1 00:04:29.706 EAL: Detected lcore 48 as core 0 on socket 0 00:04:29.706 EAL: Detected lcore 49 as core 1 on socket 0 00:04:29.706 EAL: Detected lcore 50 as core 2 on socket 0 00:04:29.706 EAL: Detected lcore 51 as core 3 on socket 0 00:04:29.706 EAL: Detected lcore 52 as core 4 on socket 0 00:04:29.706 EAL: Detected lcore 53 as core 5 on socket 0 00:04:29.706 EAL: Detected lcore 54 as core 6 on socket 0 00:04:29.706 EAL: Detected lcore 55 as core 8 on socket 0 00:04:29.706 EAL: Detected lcore 56 as core 9 on socket 0 00:04:29.706 EAL: Detected lcore 57 as core 10 on socket 0 00:04:29.706 EAL: Detected lcore 58 as core 11 on socket 0 00:04:29.706 EAL: Detected lcore 59 as core 12 on socket 0 00:04:29.706 EAL: Detected lcore 60 as core 13 on socket 0 00:04:29.706 EAL: Detected lcore 61 as core 16 on socket 0 00:04:29.706 EAL: Detected lcore 62 as core 17 on socket 0 00:04:29.706 EAL: Detected lcore 63 as core 18 on socket 0 00:04:29.706 EAL: Detected lcore 64 as core 19 on socket 0 00:04:29.706 EAL: Detected lcore 65 as core 20 on socket 0 
00:04:29.706 EAL: Detected lcore 66 as core 21 on socket 0 00:04:29.706 EAL: Detected lcore 67 as core 25 on socket 0 00:04:29.706 EAL: Detected lcore 68 as core 26 on socket 0 00:04:29.706 EAL: Detected lcore 69 as core 27 on socket 0 00:04:29.706 EAL: Detected lcore 70 as core 28 on socket 0 00:04:29.706 EAL: Detected lcore 71 as core 29 on socket 0 00:04:29.706 EAL: Detected lcore 72 as core 0 on socket 1 00:04:29.706 EAL: Detected lcore 73 as core 1 on socket 1 00:04:29.706 EAL: Detected lcore 74 as core 2 on socket 1 00:04:29.706 EAL: Detected lcore 75 as core 3 on socket 1 00:04:29.706 EAL: Detected lcore 76 as core 4 on socket 1 00:04:29.706 EAL: Detected lcore 77 as core 5 on socket 1 00:04:29.706 EAL: Detected lcore 78 as core 6 on socket 1 00:04:29.706 EAL: Detected lcore 79 as core 8 on socket 1 00:04:29.706 EAL: Detected lcore 80 as core 9 on socket 1 00:04:29.706 EAL: Detected lcore 81 as core 10 on socket 1 00:04:29.706 EAL: Detected lcore 82 as core 11 on socket 1 00:04:29.706 EAL: Detected lcore 83 as core 12 on socket 1 00:04:29.706 EAL: Detected lcore 84 as core 13 on socket 1 00:04:29.706 EAL: Detected lcore 85 as core 16 on socket 1 00:04:29.706 EAL: Detected lcore 86 as core 17 on socket 1 00:04:29.706 EAL: Detected lcore 87 as core 18 on socket 1 00:04:29.706 EAL: Detected lcore 88 as core 19 on socket 1 00:04:29.706 EAL: Detected lcore 89 as core 20 on socket 1 00:04:29.706 EAL: Detected lcore 90 as core 21 on socket 1 00:04:29.706 EAL: Detected lcore 91 as core 25 on socket 1 00:04:29.706 EAL: Detected lcore 92 as core 26 on socket 1 00:04:29.706 EAL: Detected lcore 93 as core 27 on socket 1 00:04:29.706 EAL: Detected lcore 94 as core 28 on socket 1 00:04:29.706 EAL: Detected lcore 95 as core 29 on socket 1 00:04:29.706 EAL: Maximum logical cores by configuration: 128 00:04:29.706 EAL: Detected CPU lcores: 96 00:04:29.706 EAL: Detected NUMA nodes: 2 00:04:29.706 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:29.706 EAL: Detected 
shared linkage of DPDK 00:04:29.706 EAL: No shared files mode enabled, IPC will be disabled 00:04:29.706 EAL: Bus pci wants IOVA as 'DC' 00:04:29.706 EAL: Buses did not request a specific IOVA mode. 00:04:29.706 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:29.706 EAL: Selected IOVA mode 'VA' 00:04:29.706 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.706 EAL: Probing VFIO support... 00:04:29.706 EAL: IOMMU type 1 (Type 1) is supported 00:04:29.706 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:29.706 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:29.706 EAL: VFIO support initialized 00:04:29.706 EAL: Ask a virtual area of 0x2e000 bytes 00:04:29.706 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:29.706 EAL: Setting up physically contiguous memory... 00:04:29.706 EAL: Setting maximum number of open files to 524288 00:04:29.706 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:29.706 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:29.706 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:29.706 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.706 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:29.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.706 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.706 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:29.706 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:29.706 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.706 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:29.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.706 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.706 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:29.706 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:29.706 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.706 EAL: 
Virtual area found at 0x200800400000 (size = 0x61000) 00:04:29.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.706 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.706 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:29.706 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:29.706 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.706 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:29.706 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.706 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.706 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:29.706 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:29.706 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:29.706 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.706 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:29.707 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.707 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.707 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:29.707 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:29.707 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.707 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:29.707 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.707 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.707 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:29.707 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:29.707 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.707 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:29.707 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.707 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.707 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 
00:04:29.707 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:29.707 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.707 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:29.707 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.707 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.707 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:29.707 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:29.707 EAL: Hugepages will be freed exactly as allocated. 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: TSC frequency is ~2100000 KHz 00:04:29.707 EAL: Main lcore 0 is ready (tid=7fda28064a00;cpuset=[0]) 00:04:29.707 EAL: Trying to obtain current memory policy. 00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 0 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 2MB 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:29.707 EAL: Mem event callback 'spdk:(nil)' registered 00:04:29.707 00:04:29.707 00:04:29.707 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.707 http://cunit.sourceforge.net/ 00:04:29.707 00:04:29.707 00:04:29.707 Suite: components_suite 00:04:29.707 Test: vtophys_malloc_test ...passed 00:04:29.707 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 4MB 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was shrunk by 4MB 00:04:29.707 EAL: Trying to obtain current memory policy. 00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 6MB 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was shrunk by 6MB 00:04:29.707 EAL: Trying to obtain current memory policy. 00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 10MB 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was shrunk by 10MB 00:04:29.707 EAL: Trying to obtain current memory policy. 
00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 18MB 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was shrunk by 18MB 00:04:29.707 EAL: Trying to obtain current memory policy. 00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 34MB 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was shrunk by 34MB 00:04:29.707 EAL: Trying to obtain current memory policy. 00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 66MB 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was shrunk by 66MB 00:04:29.707 EAL: Trying to obtain current memory policy. 
00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 130MB 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was shrunk by 130MB 00:04:29.707 EAL: Trying to obtain current memory policy. 00:04:29.707 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.707 EAL: Restoring previous memory policy: 4 00:04:29.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.707 EAL: request: mp_malloc_sync 00:04:29.707 EAL: No shared files mode enabled, IPC is disabled 00:04:29.707 EAL: Heap on socket 0 was expanded by 258MB 00:04:29.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.965 EAL: request: mp_malloc_sync 00:04:29.965 EAL: No shared files mode enabled, IPC is disabled 00:04:29.965 EAL: Heap on socket 0 was shrunk by 258MB 00:04:29.965 EAL: Trying to obtain current memory policy. 00:04:29.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.965 EAL: Restoring previous memory policy: 4 00:04:29.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.965 EAL: request: mp_malloc_sync 00:04:29.965 EAL: No shared files mode enabled, IPC is disabled 00:04:29.965 EAL: Heap on socket 0 was expanded by 514MB 00:04:29.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.223 EAL: request: mp_malloc_sync 00:04:30.223 EAL: No shared files mode enabled, IPC is disabled 00:04:30.223 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.223 EAL: Trying to obtain current memory policy. 
00:04:30.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.223 EAL: Restoring previous memory policy: 4 00:04:30.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.223 EAL: request: mp_malloc_sync 00:04:30.223 EAL: No shared files mode enabled, IPC is disabled 00:04:30.223 EAL: Heap on socket 0 was expanded by 1026MB 00:04:30.481 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.739 EAL: request: mp_malloc_sync 00:04:30.739 EAL: No shared files mode enabled, IPC is disabled 00:04:30.739 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:30.739 passed 00:04:30.739 00:04:30.739 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.739 suites 1 1 n/a 0 0 00:04:30.739 tests 2 2 2 0 0 00:04:30.739 asserts 497 497 497 0 n/a 00:04:30.739 00:04:30.739 Elapsed time = 0.958 seconds 00:04:30.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.739 EAL: request: mp_malloc_sync 00:04:30.739 EAL: No shared files mode enabled, IPC is disabled 00:04:30.739 EAL: Heap on socket 0 was shrunk by 2MB 00:04:30.739 EAL: No shared files mode enabled, IPC is disabled 00:04:30.739 EAL: No shared files mode enabled, IPC is disabled 00:04:30.739 EAL: No shared files mode enabled, IPC is disabled 00:04:30.739 00:04:30.739 real 0m1.075s 00:04:30.739 user 0m0.630s 00:04:30.739 sys 0m0.417s 00:04:30.739 08:05:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:30.739 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:30.739 ************************************ 00:04:30.739 END TEST env_vtophys 00:04:30.739 ************************************ 00:04:30.739 08:05:04 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.739 08:05:04 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:30.739 08:05:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:30.739 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:30.739 ************************************ 00:04:30.739 
START TEST env_pci 00:04:30.739 ************************************ 00:04:30.739 08:05:04 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.739 00:04:30.739 00:04:30.739 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.739 http://cunit.sourceforge.net/ 00:04:30.739 00:04:30.739 00:04:30.739 Suite: pci 00:04:30.739 Test: pci_hook ...[2024-02-13 08:05:04.269300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2070463 has claimed it 00:04:30.739 EAL: Cannot find device (10000:00:01.0) 00:04:30.739 EAL: Failed to attach device on primary process 00:04:30.739 passed 00:04:30.739 00:04:30.739 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.739 suites 1 1 n/a 0 0 00:04:30.739 tests 1 1 1 0 0 00:04:30.739 asserts 25 25 25 0 n/a 00:04:30.739 00:04:30.739 Elapsed time = 0.029 seconds 00:04:30.739 00:04:30.739 real 0m0.049s 00:04:30.739 user 0m0.015s 00:04:30.739 sys 0m0.033s 00:04:30.739 08:05:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:30.739 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:30.739 ************************************ 00:04:30.739 END TEST env_pci 00:04:30.739 ************************************ 00:04:30.739 08:05:04 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:30.739 08:05:04 -- env/env.sh@15 -- # uname 00:04:30.739 08:05:04 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:30.739 08:05:04 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:30.739 08:05:04 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.739 08:05:04 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:04:30.739 08:05:04 -- common/autotest_common.sh@1081 -- # 
xtrace_disable 00:04:30.739 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:30.739 ************************************ 00:04:30.739 START TEST env_dpdk_post_init 00:04:30.739 ************************************ 00:04:30.739 08:05:04 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.739 EAL: Detected CPU lcores: 96 00:04:30.739 EAL: Detected NUMA nodes: 2 00:04:30.739 EAL: Detected shared linkage of DPDK 00:04:30.739 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.739 EAL: Selected IOVA mode 'VA' 00:04:30.739 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.739 EAL: VFIO support initialized 00:04:30.739 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.997 EAL: Using IOMMU type 1 (Type 1) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:30.997 EAL: Ignore mapping IO port bar(1) 00:04:30.997 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 
00:04:31.932 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:31.932 EAL: Ignore mapping IO port bar(1) 00:04:31.932 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:35.214 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:35.214 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:35.214 Starting DPDK initialization... 00:04:35.214 Starting SPDK post initialization... 00:04:35.214 SPDK NVMe probe 00:04:35.214 Attaching to 0000:5e:00.0 00:04:35.214 Attached to 0000:5e:00.0 00:04:35.214 Cleaning up... 
00:04:35.214 00:04:35.214 real 0m4.310s 00:04:35.214 user 0m3.258s 00:04:35.214 sys 0m0.119s 00:04:35.214 08:05:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.214 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:35.214 ************************************ 00:04:35.214 END TEST env_dpdk_post_init 00:04:35.214 ************************************ 00:04:35.214 08:05:08 -- env/env.sh@26 -- # uname 00:04:35.214 08:05:08 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:35.214 08:05:08 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.214 08:05:08 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:35.214 08:05:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:35.214 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:35.214 ************************************ 00:04:35.214 START TEST env_mem_callbacks 00:04:35.214 ************************************ 00:04:35.214 08:05:08 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.214 EAL: Detected CPU lcores: 96 00:04:35.214 EAL: Detected NUMA nodes: 2 00:04:35.214 EAL: Detected shared linkage of DPDK 00:04:35.214 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.214 EAL: Selected IOVA mode 'VA' 00:04:35.214 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.214 EAL: VFIO support initialized 00:04:35.214 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.214 00:04:35.214 00:04:35.214 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.214 http://cunit.sourceforge.net/ 00:04:35.214 00:04:35.214 00:04:35.214 Suite: memory 00:04:35.214 Test: test ... 
00:04:35.214 register 0x200000200000 2097152 00:04:35.214 malloc 3145728 00:04:35.214 register 0x200000400000 4194304 00:04:35.214 buf 0x200000500000 len 3145728 PASSED 00:04:35.214 malloc 64 00:04:35.214 buf 0x2000004fff40 len 64 PASSED 00:04:35.214 malloc 4194304 00:04:35.214 register 0x200000800000 6291456 00:04:35.214 buf 0x200000a00000 len 4194304 PASSED 00:04:35.214 free 0x200000500000 3145728 00:04:35.215 free 0x2000004fff40 64 00:04:35.215 unregister 0x200000400000 4194304 PASSED 00:04:35.215 free 0x200000a00000 4194304 00:04:35.215 unregister 0x200000800000 6291456 PASSED 00:04:35.215 malloc 8388608 00:04:35.215 register 0x200000400000 10485760 00:04:35.215 buf 0x200000600000 len 8388608 PASSED 00:04:35.215 free 0x200000600000 8388608 00:04:35.215 unregister 0x200000400000 10485760 PASSED 00:04:35.215 passed 00:04:35.215 00:04:35.215 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.215 suites 1 1 n/a 0 0 00:04:35.215 tests 1 1 1 0 0 00:04:35.215 asserts 15 15 15 0 n/a 00:04:35.215 00:04:35.215 Elapsed time = 0.005 seconds 00:04:35.215 00:04:35.215 real 0m0.057s 00:04:35.215 user 0m0.015s 00:04:35.215 sys 0m0.042s 00:04:35.215 08:05:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.215 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:35.215 ************************************ 00:04:35.215 END TEST env_mem_callbacks 00:04:35.215 ************************************ 00:04:35.215 00:04:35.215 real 0m5.893s 00:04:35.215 user 0m4.141s 00:04:35.215 sys 0m0.823s 00:04:35.215 08:05:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.215 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:35.215 ************************************ 00:04:35.215 END TEST env 00:04:35.215 ************************************ 00:04:35.215 08:05:08 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:35.215 08:05:08 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 
']' 00:04:35.215 08:05:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:35.215 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:35.215 ************************************ 00:04:35.215 START TEST rpc 00:04:35.215 ************************************ 00:04:35.215 08:05:08 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:35.215 * Looking for test storage... 00:04:35.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.473 08:05:08 -- rpc/rpc.sh@65 -- # spdk_pid=2071365 00:04:35.473 08:05:08 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.473 08:05:08 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:35.473 08:05:08 -- rpc/rpc.sh@67 -- # waitforlisten 2071365 00:04:35.473 08:05:08 -- common/autotest_common.sh@817 -- # '[' -z 2071365 ']' 00:04:35.473 08:05:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.474 08:05:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:35.474 08:05:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.474 08:05:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:35.474 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:35.474 [2024-02-13 08:05:08.952784] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:04:35.474 [2024-02-13 08:05:08.952831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071365 ] 00:04:35.474 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.474 [2024-02-13 08:05:09.010458] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.474 [2024-02-13 08:05:09.085822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.474 [2024-02-13 08:05:09.085926] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:35.474 [2024-02-13 08:05:09.085934] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2071365' to capture a snapshot of events at runtime. 00:04:35.474 [2024-02-13 08:05:09.085941] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2071365 for offline analysis/debug. 
00:04:35.474 [2024-02-13 08:05:09.085962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.455 08:05:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:36.455 08:05:09 -- common/autotest_common.sh@850 -- # return 0 00:04:36.455 08:05:09 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.455 08:05:09 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.455 08:05:09 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:36.455 08:05:09 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:36.455 08:05:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:36.455 08:05:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:36.455 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.455 ************************************ 00:04:36.455 START TEST rpc_integrity 00:04:36.455 ************************************ 00:04:36.455 08:05:09 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:04:36.455 08:05:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.455 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.455 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.455 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.455 08:05:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.455 08:05:09 -- rpc/rpc.sh@13 -- # jq length 00:04:36.455 08:05:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:36.455 08:05:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.455 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.455 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.455 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.455 08:05:09 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:36.455 08:05:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.455 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.455 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.455 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.456 { 00:04:36.456 "name": "Malloc0", 00:04:36.456 "aliases": [ 00:04:36.456 "42b8f35a-3131-4f3f-9e63-29f949dcf3c7" 00:04:36.456 ], 00:04:36.456 "product_name": "Malloc disk", 00:04:36.456 "block_size": 512, 00:04:36.456 "num_blocks": 16384, 00:04:36.456 "uuid": "42b8f35a-3131-4f3f-9e63-29f949dcf3c7", 00:04:36.456 "assigned_rate_limits": { 00:04:36.456 "rw_ios_per_sec": 0, 00:04:36.456 "rw_mbytes_per_sec": 0, 00:04:36.456 "r_mbytes_per_sec": 0, 00:04:36.456 "w_mbytes_per_sec": 0 00:04:36.456 }, 00:04:36.456 "claimed": false, 00:04:36.456 "zoned": false, 00:04:36.456 "supported_io_types": { 00:04:36.456 "read": true, 00:04:36.456 "write": true, 00:04:36.456 "unmap": true, 00:04:36.456 "write_zeroes": true, 00:04:36.456 "flush": true, 00:04:36.456 "reset": true, 00:04:36.456 "compare": false, 00:04:36.456 "compare_and_write": false, 00:04:36.456 "abort": true, 00:04:36.456 "nvme_admin": false, 00:04:36.456 "nvme_io": false 00:04:36.456 }, 00:04:36.456 "memory_domains": [ 00:04:36.456 { 00:04:36.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.456 "dma_device_type": 2 00:04:36.456 } 00:04:36.456 ], 00:04:36.456 "driver_specific": {} 00:04:36.456 } 00:04:36.456 ]' 00:04:36.456 08:05:09 -- rpc/rpc.sh@17 -- # jq length 00:04:36.456 08:05:09 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:04:36.456 08:05:09 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:36.456 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 [2024-02-13 08:05:09.879367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:36.456 [2024-02-13 08:05:09.879399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.456 [2024-02-13 08:05:09.879410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16ed2b0 00:04:36.456 [2024-02-13 08:05:09.879416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.456 [2024-02-13 08:05:09.880462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.456 [2024-02-13 08:05:09.880482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.456 Passthru0 00:04:36.456 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:09 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.456 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:09 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.456 { 00:04:36.456 "name": "Malloc0", 00:04:36.456 "aliases": [ 00:04:36.456 "42b8f35a-3131-4f3f-9e63-29f949dcf3c7" 00:04:36.456 ], 00:04:36.456 "product_name": "Malloc disk", 00:04:36.456 "block_size": 512, 00:04:36.456 "num_blocks": 16384, 00:04:36.456 "uuid": "42b8f35a-3131-4f3f-9e63-29f949dcf3c7", 00:04:36.456 "assigned_rate_limits": { 00:04:36.456 "rw_ios_per_sec": 0, 00:04:36.456 "rw_mbytes_per_sec": 0, 00:04:36.456 "r_mbytes_per_sec": 0, 00:04:36.456 "w_mbytes_per_sec": 0 00:04:36.456 }, 00:04:36.456 "claimed": true, 00:04:36.456 "claim_type": "exclusive_write", 00:04:36.456 "zoned": 
false, 00:04:36.456 "supported_io_types": { 00:04:36.456 "read": true, 00:04:36.456 "write": true, 00:04:36.456 "unmap": true, 00:04:36.456 "write_zeroes": true, 00:04:36.456 "flush": true, 00:04:36.456 "reset": true, 00:04:36.456 "compare": false, 00:04:36.456 "compare_and_write": false, 00:04:36.456 "abort": true, 00:04:36.456 "nvme_admin": false, 00:04:36.456 "nvme_io": false 00:04:36.456 }, 00:04:36.456 "memory_domains": [ 00:04:36.456 { 00:04:36.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.456 "dma_device_type": 2 00:04:36.456 } 00:04:36.456 ], 00:04:36.456 "driver_specific": {} 00:04:36.456 }, 00:04:36.456 { 00:04:36.456 "name": "Passthru0", 00:04:36.456 "aliases": [ 00:04:36.456 "d5a8c7d9-ebe5-5b30-9a51-e64560191969" 00:04:36.456 ], 00:04:36.456 "product_name": "passthru", 00:04:36.456 "block_size": 512, 00:04:36.456 "num_blocks": 16384, 00:04:36.456 "uuid": "d5a8c7d9-ebe5-5b30-9a51-e64560191969", 00:04:36.456 "assigned_rate_limits": { 00:04:36.456 "rw_ios_per_sec": 0, 00:04:36.456 "rw_mbytes_per_sec": 0, 00:04:36.456 "r_mbytes_per_sec": 0, 00:04:36.456 "w_mbytes_per_sec": 0 00:04:36.456 }, 00:04:36.456 "claimed": false, 00:04:36.456 "zoned": false, 00:04:36.456 "supported_io_types": { 00:04:36.456 "read": true, 00:04:36.456 "write": true, 00:04:36.456 "unmap": true, 00:04:36.456 "write_zeroes": true, 00:04:36.456 "flush": true, 00:04:36.456 "reset": true, 00:04:36.456 "compare": false, 00:04:36.456 "compare_and_write": false, 00:04:36.456 "abort": true, 00:04:36.456 "nvme_admin": false, 00:04:36.456 "nvme_io": false 00:04:36.456 }, 00:04:36.456 "memory_domains": [ 00:04:36.456 { 00:04:36.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.456 "dma_device_type": 2 00:04:36.456 } 00:04:36.456 ], 00:04:36.456 "driver_specific": { 00:04:36.456 "passthru": { 00:04:36.456 "name": "Passthru0", 00:04:36.456 "base_bdev_name": "Malloc0" 00:04:36.456 } 00:04:36.456 } 00:04:36.456 } 00:04:36.456 ]' 00:04:36.456 08:05:09 -- rpc/rpc.sh@21 -- # jq length 
00:04:36.456 08:05:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.456 08:05:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.456 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:36.456 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.456 08:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 08:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.456 08:05:09 -- rpc/rpc.sh@26 -- # jq length 00:04:36.456 08:05:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.456 00:04:36.456 real 0m0.269s 00:04:36.456 user 0m0.177s 00:04:36.456 sys 0m0.031s 00:04:36.456 08:05:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.456 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 ************************************ 00:04:36.456 END TEST rpc_integrity 00:04:36.456 ************************************ 00:04:36.456 08:05:10 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:36.456 08:05:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:36.456 08:05:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:36.456 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 ************************************ 00:04:36.456 START TEST rpc_plugins 00:04:36.456 ************************************ 00:04:36.456 08:05:10 -- common/autotest_common.sh@1102 -- # rpc_plugins 00:04:36.456 08:05:10 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:36.456 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:10 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:36.456 08:05:10 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:36.456 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:10 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:36.456 { 00:04:36.456 "name": "Malloc1", 00:04:36.456 "aliases": [ 00:04:36.456 "581998b8-ed6b-4d5c-b374-31a008f92d8b" 00:04:36.456 ], 00:04:36.456 "product_name": "Malloc disk", 00:04:36.456 "block_size": 4096, 00:04:36.456 "num_blocks": 256, 00:04:36.456 "uuid": "581998b8-ed6b-4d5c-b374-31a008f92d8b", 00:04:36.456 "assigned_rate_limits": { 00:04:36.456 "rw_ios_per_sec": 0, 00:04:36.456 "rw_mbytes_per_sec": 0, 00:04:36.456 "r_mbytes_per_sec": 0, 00:04:36.456 "w_mbytes_per_sec": 0 00:04:36.456 }, 00:04:36.456 "claimed": false, 00:04:36.456 "zoned": false, 00:04:36.456 "supported_io_types": { 00:04:36.456 "read": true, 00:04:36.456 "write": true, 00:04:36.456 "unmap": true, 00:04:36.456 "write_zeroes": true, 00:04:36.456 "flush": true, 00:04:36.456 "reset": true, 00:04:36.456 "compare": false, 00:04:36.456 "compare_and_write": false, 00:04:36.456 "abort": true, 00:04:36.456 "nvme_admin": false, 00:04:36.456 "nvme_io": false 00:04:36.456 }, 00:04:36.456 "memory_domains": [ 00:04:36.456 { 00:04:36.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.456 "dma_device_type": 2 00:04:36.456 } 00:04:36.456 ], 00:04:36.456 "driver_specific": {} 00:04:36.456 } 00:04:36.456 ]' 00:04:36.456 08:05:10 -- rpc/rpc.sh@32 -- # jq length 00:04:36.456 08:05:10 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:36.456 08:05:10 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:36.456 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.456 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.456 08:05:10 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:36.456 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.456 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.715 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.715 08:05:10 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:36.715 08:05:10 -- rpc/rpc.sh@36 -- # jq length 00:04:36.715 08:05:10 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:36.715 00:04:36.715 real 0m0.136s 00:04:36.715 user 0m0.083s 00:04:36.715 sys 0m0.018s 00:04:36.715 08:05:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.715 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.715 ************************************ 00:04:36.715 END TEST rpc_plugins 00:04:36.715 ************************************ 00:04:36.715 08:05:10 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:36.715 08:05:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:36.715 08:05:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:36.715 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.715 ************************************ 00:04:36.715 START TEST rpc_trace_cmd_test 00:04:36.715 ************************************ 00:04:36.715 08:05:10 -- common/autotest_common.sh@1102 -- # rpc_trace_cmd_test 00:04:36.715 08:05:10 -- rpc/rpc.sh@40 -- # local info 00:04:36.715 08:05:10 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:36.715 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.715 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.715 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.715 08:05:10 -- 
rpc/rpc.sh@42 -- # info='{ 00:04:36.715 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2071365", 00:04:36.715 "tpoint_group_mask": "0x8", 00:04:36.715 "iscsi_conn": { 00:04:36.715 "mask": "0x2", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "scsi": { 00:04:36.715 "mask": "0x4", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "bdev": { 00:04:36.715 "mask": "0x8", 00:04:36.715 "tpoint_mask": "0xffffffffffffffff" 00:04:36.715 }, 00:04:36.715 "nvmf_rdma": { 00:04:36.715 "mask": "0x10", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "nvmf_tcp": { 00:04:36.715 "mask": "0x20", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "ftl": { 00:04:36.715 "mask": "0x40", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "blobfs": { 00:04:36.715 "mask": "0x80", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "dsa": { 00:04:36.715 "mask": "0x200", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "thread": { 00:04:36.715 "mask": "0x400", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "nvme_pcie": { 00:04:36.715 "mask": "0x800", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "iaa": { 00:04:36.715 "mask": "0x1000", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "nvme_tcp": { 00:04:36.715 "mask": "0x2000", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 }, 00:04:36.715 "bdev_nvme": { 00:04:36.715 "mask": "0x4000", 00:04:36.715 "tpoint_mask": "0x0" 00:04:36.715 } 00:04:36.715 }' 00:04:36.715 08:05:10 -- rpc/rpc.sh@43 -- # jq length 00:04:36.715 08:05:10 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:36.715 08:05:10 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:36.715 08:05:10 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:36.715 08:05:10 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:36.715 08:05:10 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:36.715 08:05:10 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:36.973 
08:05:10 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:36.973 08:05:10 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:36.973 08:05:10 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:36.973 00:04:36.973 real 0m0.206s 00:04:36.973 user 0m0.177s 00:04:36.973 sys 0m0.022s 00:04:36.973 08:05:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.973 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.973 ************************************ 00:04:36.973 END TEST rpc_trace_cmd_test 00:04:36.973 ************************************ 00:04:36.973 08:05:10 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.973 08:05:10 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.973 08:05:10 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.973 08:05:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:36.973 08:05:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:36.973 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.973 ************************************ 00:04:36.973 START TEST rpc_daemon_integrity 00:04:36.973 ************************************ 00:04:36.973 08:05:10 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:04:36.973 08:05:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.973 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.973 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.973 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.973 08:05:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.973 08:05:10 -- rpc/rpc.sh@13 -- # jq length 00:04:36.973 08:05:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.973 08:05:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.973 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.973 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.973 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.973 08:05:10 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:36.973 08:05:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.973 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.973 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.973 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.973 08:05:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.973 { 00:04:36.973 "name": "Malloc2", 00:04:36.973 "aliases": [ 00:04:36.973 "4b5d7bac-3d56-42f2-8848-ff9a81ed4b4f" 00:04:36.973 ], 00:04:36.973 "product_name": "Malloc disk", 00:04:36.973 "block_size": 512, 00:04:36.973 "num_blocks": 16384, 00:04:36.973 "uuid": "4b5d7bac-3d56-42f2-8848-ff9a81ed4b4f", 00:04:36.973 "assigned_rate_limits": { 00:04:36.973 "rw_ios_per_sec": 0, 00:04:36.973 "rw_mbytes_per_sec": 0, 00:04:36.973 "r_mbytes_per_sec": 0, 00:04:36.973 "w_mbytes_per_sec": 0 00:04:36.973 }, 00:04:36.973 "claimed": false, 00:04:36.973 "zoned": false, 00:04:36.973 "supported_io_types": { 00:04:36.973 "read": true, 00:04:36.973 "write": true, 00:04:36.973 "unmap": true, 00:04:36.973 "write_zeroes": true, 00:04:36.973 "flush": true, 00:04:36.973 "reset": true, 00:04:36.973 "compare": false, 00:04:36.973 "compare_and_write": false, 00:04:36.973 "abort": true, 00:04:36.973 "nvme_admin": false, 00:04:36.973 "nvme_io": false 00:04:36.973 }, 00:04:36.973 "memory_domains": [ 00:04:36.973 { 00:04:36.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.973 "dma_device_type": 2 00:04:36.973 } 00:04:36.973 ], 00:04:36.973 "driver_specific": {} 00:04:36.973 } 00:04:36.973 ]' 00:04:36.973 08:05:10 -- rpc/rpc.sh@17 -- # jq length 00:04:36.973 08:05:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.973 08:05:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.973 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.973 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.973 [2024-02-13 08:05:10.597306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:04:36.973 [2024-02-13 08:05:10.597333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.974 [2024-02-13 08:05:10.597347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16ecdc0 00:04:36.974 [2024-02-13 08:05:10.597357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.974 [2024-02-13 08:05:10.598309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.974 [2024-02-13 08:05:10.598328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.974 Passthru0 00:04:36.974 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.974 08:05:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.974 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.974 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.974 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.974 08:05:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.974 { 00:04:36.974 "name": "Malloc2", 00:04:36.974 "aliases": [ 00:04:36.974 "4b5d7bac-3d56-42f2-8848-ff9a81ed4b4f" 00:04:36.974 ], 00:04:36.974 "product_name": "Malloc disk", 00:04:36.974 "block_size": 512, 00:04:36.974 "num_blocks": 16384, 00:04:36.974 "uuid": "4b5d7bac-3d56-42f2-8848-ff9a81ed4b4f", 00:04:36.974 "assigned_rate_limits": { 00:04:36.974 "rw_ios_per_sec": 0, 00:04:36.974 "rw_mbytes_per_sec": 0, 00:04:36.974 "r_mbytes_per_sec": 0, 00:04:36.974 "w_mbytes_per_sec": 0 00:04:36.974 }, 00:04:36.974 "claimed": true, 00:04:36.974 "claim_type": "exclusive_write", 00:04:36.974 "zoned": false, 00:04:36.974 "supported_io_types": { 00:04:36.974 "read": true, 00:04:36.974 "write": true, 00:04:36.974 "unmap": true, 00:04:36.974 "write_zeroes": true, 00:04:36.974 "flush": true, 00:04:36.974 "reset": true, 00:04:36.974 "compare": false, 00:04:36.974 "compare_and_write": false, 00:04:36.974 "abort": true, 00:04:36.974 
"nvme_admin": false, 00:04:36.974 "nvme_io": false 00:04:36.974 }, 00:04:36.974 "memory_domains": [ 00:04:36.974 { 00:04:36.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.974 "dma_device_type": 2 00:04:36.974 } 00:04:36.974 ], 00:04:36.974 "driver_specific": {} 00:04:36.974 }, 00:04:36.974 { 00:04:36.974 "name": "Passthru0", 00:04:36.974 "aliases": [ 00:04:36.974 "50eb1b74-be7f-5826-8839-760dfd5126ab" 00:04:36.974 ], 00:04:36.974 "product_name": "passthru", 00:04:36.974 "block_size": 512, 00:04:36.974 "num_blocks": 16384, 00:04:36.974 "uuid": "50eb1b74-be7f-5826-8839-760dfd5126ab", 00:04:36.974 "assigned_rate_limits": { 00:04:36.974 "rw_ios_per_sec": 0, 00:04:36.974 "rw_mbytes_per_sec": 0, 00:04:36.974 "r_mbytes_per_sec": 0, 00:04:36.974 "w_mbytes_per_sec": 0 00:04:36.974 }, 00:04:36.974 "claimed": false, 00:04:36.974 "zoned": false, 00:04:36.974 "supported_io_types": { 00:04:36.974 "read": true, 00:04:36.974 "write": true, 00:04:36.974 "unmap": true, 00:04:36.974 "write_zeroes": true, 00:04:36.974 "flush": true, 00:04:36.974 "reset": true, 00:04:36.974 "compare": false, 00:04:36.974 "compare_and_write": false, 00:04:36.974 "abort": true, 00:04:36.974 "nvme_admin": false, 00:04:36.974 "nvme_io": false 00:04:36.974 }, 00:04:36.974 "memory_domains": [ 00:04:36.974 { 00:04:36.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.974 "dma_device_type": 2 00:04:36.974 } 00:04:36.974 ], 00:04:36.974 "driver_specific": { 00:04:36.974 "passthru": { 00:04:36.974 "name": "Passthru0", 00:04:36.974 "base_bdev_name": "Malloc2" 00:04:36.974 } 00:04:36.974 } 00:04:36.974 } 00:04:36.974 ]' 00:04:36.974 08:05:10 -- rpc/rpc.sh@21 -- # jq length 00:04:37.232 08:05:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.232 08:05:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.232 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.232 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:37.232 08:05:10 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.232 08:05:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:37.232 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.232 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:37.232 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.232 08:05:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.232 08:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.232 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:37.232 08:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.232 08:05:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.232 08:05:10 -- rpc/rpc.sh@26 -- # jq length 00:04:37.232 08:05:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.232 00:04:37.232 real 0m0.269s 00:04:37.232 user 0m0.176s 00:04:37.232 sys 0m0.032s 00:04:37.232 08:05:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.232 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:37.232 ************************************ 00:04:37.232 END TEST rpc_daemon_integrity 00:04:37.232 ************************************ 00:04:37.232 08:05:10 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.232 08:05:10 -- rpc/rpc.sh@84 -- # killprocess 2071365 00:04:37.232 08:05:10 -- common/autotest_common.sh@924 -- # '[' -z 2071365 ']' 00:04:37.232 08:05:10 -- common/autotest_common.sh@928 -- # kill -0 2071365 00:04:37.232 08:05:10 -- common/autotest_common.sh@929 -- # uname 00:04:37.232 08:05:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:37.232 08:05:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2071365 00:04:37.232 08:05:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:37.232 08:05:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:37.232 08:05:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2071365' 00:04:37.232 killing process 
with pid 2071365 00:04:37.232 08:05:10 -- common/autotest_common.sh@943 -- # kill 2071365 00:04:37.232 08:05:10 -- common/autotest_common.sh@948 -- # wait 2071365 00:04:37.490 00:04:37.491 real 0m2.315s 00:04:37.491 user 0m2.964s 00:04:37.491 sys 0m0.587s 00:04:37.491 08:05:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.491 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.491 ************************************ 00:04:37.491 END TEST rpc 00:04:37.491 ************************************ 00:04:37.491 08:05:11 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.491 08:05:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:37.491 08:05:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:37.491 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.750 ************************************ 00:04:37.750 START TEST rpc_client 00:04:37.750 ************************************ 00:04:37.750 08:05:11 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:37.750 * Looking for test storage... 
00:04:37.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:37.750 08:05:11 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:37.750 OK 00:04:37.750 08:05:11 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:37.750 00:04:37.750 real 0m0.106s 00:04:37.750 user 0m0.052s 00:04:37.750 sys 0m0.061s 00:04:37.750 08:05:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.750 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.750 ************************************ 00:04:37.750 END TEST rpc_client 00:04:37.750 ************************************ 00:04:37.750 08:05:11 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:37.750 08:05:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:37.750 08:05:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:37.750 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.750 ************************************ 00:04:37.750 START TEST json_config 00:04:37.750 ************************************ 00:04:37.750 08:05:11 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:37.750 08:05:11 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:37.750 08:05:11 -- nvmf/common.sh@7 -- # uname -s 00:04:37.750 08:05:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.750 08:05:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.750 08:05:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.750 08:05:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.750 08:05:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.750 08:05:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.750 08:05:11 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.750 08:05:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.750 08:05:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.750 08:05:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.750 08:05:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:37.750 08:05:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:37.750 08:05:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.750 08:05:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.750 08:05:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.750 08:05:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:37.750 08:05:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.750 08:05:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.750 08:05:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.750 08:05:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.750 08:05:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.750 08:05:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.750 08:05:11 -- paths/export.sh@5 -- # export PATH 00:04:37.750 08:05:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.750 08:05:11 -- nvmf/common.sh@46 -- # : 0 00:04:37.750 08:05:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:37.750 08:05:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:37.750 08:05:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:37.750 08:05:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.750 08:05:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.750 08:05:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:37.750 08:05:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:37.750 08:05:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:37.750 
08:05:11 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:37.750 08:05:11 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:37.750 08:05:11 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:37.750 08:05:11 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:37.750 08:05:11 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:37.750 08:05:11 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:37.750 08:05:11 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:37.750 08:05:11 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:37.750 08:05:11 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:37.750 08:05:11 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:37.750 08:05:11 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:37.750 08:05:11 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:37.750 08:05:11 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:37.750 08:05:11 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.750 08:05:11 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:37.750 INFO: JSON configuration test init 00:04:37.750 08:05:11 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:37.750 08:05:11 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:37.750 08:05:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:37.750 08:05:11 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.750 08:05:11 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:37.750 08:05:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:37.750 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.750 08:05:11 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:37.750 08:05:11 -- json_config/json_config.sh@98 -- # local app=target 00:04:37.750 08:05:11 -- json_config/json_config.sh@99 -- # shift 00:04:37.750 08:05:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:37.750 08:05:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:37.750 08:05:11 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:37.750 08:05:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:37.750 08:05:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:37.750 08:05:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=2072025 00:04:37.750 08:05:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:37.750 Waiting for target to run... 00:04:37.750 08:05:11 -- json_config/json_config.sh@114 -- # waitforlisten 2072025 /var/tmp/spdk_tgt.sock 00:04:37.750 08:05:11 -- common/autotest_common.sh@817 -- # '[' -z 2072025 ']' 00:04:37.750 08:05:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.750 08:05:11 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:37.750 08:05:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.750 08:05:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:37.750 08:05:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.750 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:38.009 [2024-02-13 08:05:11.472623] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:04:38.009 [2024-02-13 08:05:11.472677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072025 ] 00:04:38.009 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.267 [2024-02-13 08:05:11.903143] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.525 [2024-02-13 08:05:11.987663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.525 [2024-02-13 08:05:11.987764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.783 08:05:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.783 08:05:12 -- common/autotest_common.sh@850 -- # return 0 00:04:38.783 08:05:12 -- json_config/json_config.sh@115 -- # echo '' 00:04:38.783 00:04:38.783 08:05:12 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:38.783 08:05:12 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:38.783 08:05:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:38.783 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:38.783 08:05:12 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:38.783 08:05:12 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:38.783 08:05:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:38.783 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:04:38.783 08:05:12 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:38.783 08:05:12 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:38.783 08:05:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.066 08:05:15 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:42.066 08:05:15 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:42.066 08:05:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:42.066 08:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:42.066 08:05:15 -- json_config/json_config.sh@48 -- # local ret=0 00:04:42.066 08:05:15 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.066 08:05:15 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:42.066 08:05:15 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:42.066 08:05:15 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:42.066 08:05:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.066 08:05:15 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:42.066 08:05:15 -- json_config/json_config.sh@51 -- # local get_types 00:04:42.066 08:05:15 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:42.066 08:05:15 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:42.066 08:05:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:42.066 08:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:42.066 08:05:15 -- json_config/json_config.sh@58 -- # return 0 00:04:42.066 08:05:15 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:42.066 08:05:15 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:42.066 08:05:15 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:04:42.066 08:05:15 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:42.066 08:05:15 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:42.066 08:05:15 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:42.066 08:05:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:42.066 08:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:42.066 08:05:15 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:42.066 08:05:15 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:42.066 08:05:15 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:42.066 08:05:15 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.066 08:05:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.066 MallocForNvmf0 00:04:42.066 08:05:15 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.066 08:05:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.324 MallocForNvmf1 00:04:42.324 08:05:15 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.324 08:05:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.582 [2024-02-13 08:05:16.027299] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.582 08:05:16 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.582 08:05:16 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.582 08:05:16 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.582 08:05:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.840 08:05:16 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:42.840 08:05:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:42.840 08:05:16 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:42.840 08:05:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.097 [2024-02-13 08:05:16.649228] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.097 08:05:16 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:43.097 08:05:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.097 08:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:43.097 08:05:16 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:43.097 08:05:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.097 08:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:43.097 08:05:16 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:43.097 08:05:16 -- json_config/json_config.sh@353 -- 
# tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.097 08:05:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.355 MallocBdevForConfigChangeCheck 00:04:43.355 08:05:16 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:43.355 08:05:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.355 08:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:43.355 08:05:16 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:43.355 08:05:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.613 08:05:17 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:43.613 INFO: shutting down applications... 00:04:43.613 08:05:17 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:43.613 08:05:17 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:43.613 08:05:17 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:43.613 08:05:17 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:45.508 Calling clear_iscsi_subsystem 00:04:45.508 Calling clear_nvmf_subsystem 00:04:45.508 Calling clear_nbd_subsystem 00:04:45.508 Calling clear_ublk_subsystem 00:04:45.508 Calling clear_vhost_blk_subsystem 00:04:45.508 Calling clear_vhost_scsi_subsystem 00:04:45.508 Calling clear_scheduler_subsystem 00:04:45.508 Calling clear_bdev_subsystem 00:04:45.508 Calling clear_accel_subsystem 00:04:45.508 Calling clear_vmd_subsystem 00:04:45.508 Calling clear_sock_subsystem 00:04:45.508 Calling clear_iobuf_subsystem 00:04:45.508 08:05:18 -- json_config/json_config.sh@390 -- # local 
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:45.508 08:05:18 -- json_config/json_config.sh@396 -- # count=100 00:04:45.508 08:05:18 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:45.508 08:05:18 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.508 08:05:18 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:45.508 08:05:18 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:45.508 08:05:19 -- json_config/json_config.sh@398 -- # break 00:04:45.508 08:05:19 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:45.508 08:05:19 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:45.508 08:05:19 -- json_config/json_config.sh@120 -- # local app=target 00:04:45.508 08:05:19 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:45.508 08:05:19 -- json_config/json_config.sh@124 -- # [[ -n 2072025 ]] 00:04:45.508 08:05:19 -- json_config/json_config.sh@127 -- # kill -SIGINT 2072025 00:04:45.508 08:05:19 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:45.508 08:05:19 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:45.508 08:05:19 -- json_config/json_config.sh@130 -- # kill -0 2072025 00:04:45.508 08:05:19 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:46.076 08:05:19 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:46.076 08:05:19 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:46.076 08:05:19 -- json_config/json_config.sh@130 -- # kill -0 2072025 00:04:46.076 08:05:19 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:46.076 08:05:19 -- json_config/json_config.sh@132 -- # break 00:04:46.076 08:05:19 -- 
json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:46.076 08:05:19 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:46.076 SPDK target shutdown done 00:04:46.076 08:05:19 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:46.076 INFO: relaunching applications... 00:04:46.076 08:05:19 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.076 08:05:19 -- json_config/json_config.sh@98 -- # local app=target 00:04:46.076 08:05:19 -- json_config/json_config.sh@99 -- # shift 00:04:46.076 08:05:19 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:46.076 08:05:19 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:46.076 08:05:19 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:46.076 08:05:19 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:46.076 08:05:19 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:46.076 08:05:19 -- json_config/json_config.sh@111 -- # app_pid[$app]=2073534 00:04:46.076 08:05:19 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:46.076 Waiting for target to run... 
00:04:46.076 08:05:19 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.076 08:05:19 -- json_config/json_config.sh@114 -- # waitforlisten 2073534 /var/tmp/spdk_tgt.sock 00:04:46.076 08:05:19 -- common/autotest_common.sh@817 -- # '[' -z 2073534 ']' 00:04:46.076 08:05:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.076 08:05:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:46.076 08:05:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.076 08:05:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:46.076 08:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:46.076 [2024-02-13 08:05:19.589496] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:04:46.076 [2024-02-13 08:05:19.589551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073534 ] 00:04:46.076 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.643 [2024-02-13 08:05:20.029760] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.643 [2024-02-13 08:05:20.106620] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.643 [2024-02-13 08:05:20.106732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.643 [2024-02-13 08:05:20.106758] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:49.925 [2024-02-13 08:05:23.113462] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.925 [2024-02-13 08:05:23.145729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:50.182 08:05:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:50.182 08:05:23 -- common/autotest_common.sh@850 -- # return 0 00:04:50.182 08:05:23 -- json_config/json_config.sh@115 -- # echo '' 00:04:50.182 00:04:50.182 08:05:23 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:50.182 08:05:23 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:50.182 INFO: Checking if target configuration is the same... 
00:04:50.182 08:05:23 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.182 08:05:23 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:50.182 08:05:23 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.182 + '[' 2 -ne 2 ']' 00:04:50.182 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:50.182 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:50.182 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.182 +++ basename /dev/fd/62 00:04:50.182 ++ mktemp /tmp/62.XXX 00:04:50.182 + tmp_file_1=/tmp/62.Yn6 00:04:50.182 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.182 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.182 + tmp_file_2=/tmp/spdk_tgt_config.json.2xC 00:04:50.182 + ret=0 00:04:50.182 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:50.440 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:50.440 + diff -u /tmp/62.Yn6 /tmp/spdk_tgt_config.json.2xC 00:04:50.440 + echo 'INFO: JSON config files are the same' 00:04:50.440 INFO: JSON config files are the same 00:04:50.440 + rm /tmp/62.Yn6 /tmp/spdk_tgt_config.json.2xC 00:04:50.440 + exit 0 00:04:50.440 08:05:24 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:50.440 08:05:24 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:50.440 INFO: changing configuration and checking if this can be detected... 
00:04:50.440 08:05:24 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.440 08:05:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.698 08:05:24 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.698 08:05:24 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:50.698 08:05:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.698 + '[' 2 -ne 2 ']' 00:04:50.698 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:50.698 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:50.698 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.698 +++ basename /dev/fd/62 00:04:50.698 ++ mktemp /tmp/62.XXX 00:04:50.698 + tmp_file_1=/tmp/62.ZMF 00:04:50.698 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.698 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.698 + tmp_file_2=/tmp/spdk_tgt_config.json.5b4 00:04:50.698 + ret=0 00:04:50.698 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:50.957 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:50.957 + diff -u /tmp/62.ZMF /tmp/spdk_tgt_config.json.5b4 00:04:50.957 + ret=1 00:04:50.957 + echo '=== Start of file: /tmp/62.ZMF ===' 00:04:50.957 + cat /tmp/62.ZMF 00:04:50.957 + echo '=== End of file: /tmp/62.ZMF ===' 00:04:50.957 + echo '' 00:04:50.957 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5b4 ===' 00:04:50.957 + cat /tmp/spdk_tgt_config.json.5b4 00:04:50.957 + echo '=== End of file: /tmp/spdk_tgt_config.json.5b4 ===' 00:04:50.957 + echo '' 00:04:50.957 + rm /tmp/62.ZMF /tmp/spdk_tgt_config.json.5b4 00:04:50.957 + exit 1 00:04:50.957 08:05:24 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:50.957 INFO: configuration change detected. 
00:04:50.957 08:05:24 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:50.957 08:05:24 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:50.957 08:05:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:50.957 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:50.957 08:05:24 -- json_config/json_config.sh@360 -- # local ret=0 00:04:50.957 08:05:24 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:50.957 08:05:24 -- json_config/json_config.sh@370 -- # [[ -n 2073534 ]] 00:04:50.957 08:05:24 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:50.957 08:05:24 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:50.957 08:05:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:50.957 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:50.957 08:05:24 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:50.957 08:05:24 -- json_config/json_config.sh@246 -- # uname -s 00:04:50.957 08:05:24 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:50.957 08:05:24 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:50.957 08:05:24 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:50.957 08:05:24 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:50.957 08:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:50.957 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:50.957 08:05:24 -- json_config/json_config.sh@376 -- # killprocess 2073534 00:04:50.957 08:05:24 -- common/autotest_common.sh@924 -- # '[' -z 2073534 ']' 00:04:50.957 08:05:24 -- common/autotest_common.sh@928 -- # kill -0 2073534 00:04:50.957 08:05:24 -- common/autotest_common.sh@929 -- # uname 00:04:50.957 08:05:24 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:04:50.957 08:05:24 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2073534 00:04:50.957 
08:05:24 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:04:50.957 08:05:24 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:04:50.957 08:05:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2073534' 00:04:50.957 killing process with pid 2073534 00:04:50.957 08:05:24 -- common/autotest_common.sh@943 -- # kill 2073534 00:04:50.957 [2024-02-13 08:05:24.594374] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:50.957 08:05:24 -- common/autotest_common.sh@948 -- # wait 2073534 00:04:52.857 08:05:26 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.857 08:05:26 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:52.857 08:05:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:52.857 08:05:26 -- common/autotest_common.sh@10 -- # set +x 00:04:52.857 08:05:26 -- json_config/json_config.sh@381 -- # return 0 00:04:52.857 08:05:26 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:52.857 INFO: Success 00:04:52.857 00:04:52.857 real 0m14.828s 00:04:52.857 user 0m15.686s 00:04:52.857 sys 0m2.040s 00:04:52.858 08:05:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.858 08:05:26 -- common/autotest_common.sh@10 -- # set +x 00:04:52.858 ************************************ 00:04:52.858 END TEST json_config 00:04:52.858 ************************************ 00:04:52.858 08:05:26 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:52.858 08:05:26 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:52.858 08:05:26 -- common/autotest_common.sh@1081 -- # 
xtrace_disable 00:04:52.858 08:05:26 -- common/autotest_common.sh@10 -- # set +x 00:04:52.858 ************************************ 00:04:52.858 START TEST json_config_extra_key 00:04:52.858 ************************************ 00:04:52.858 08:05:26 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.858 08:05:26 -- nvmf/common.sh@7 -- # uname -s 00:04:52.858 08:05:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.858 08:05:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.858 08:05:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.858 08:05:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.858 08:05:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.858 08:05:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.858 08:05:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.858 08:05:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.858 08:05:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.858 08:05:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.858 08:05:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:52.858 08:05:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:52.858 08:05:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.858 08:05:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.858 08:05:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.858 08:05:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:52.858 08:05:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.858 08:05:26 
-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.858 08:05:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.858 08:05:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.858 08:05:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.858 08:05:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.858 08:05:26 -- paths/export.sh@5 -- # export PATH 00:04:52.858 08:05:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.858 08:05:26 -- nvmf/common.sh@46 -- # : 0 00:04:52.858 08:05:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:52.858 08:05:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:52.858 08:05:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:52.858 08:05:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.858 08:05:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.858 08:05:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:52.858 08:05:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:52.858 08:05:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" 
"${LINENO}"' ERR 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:52.858 INFO: launching applications... 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2074810 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:52.858 Waiting for target to run... 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2074810 /var/tmp/spdk_tgt.sock 00:04:52.858 08:05:26 -- common/autotest_common.sh@817 -- # '[' -z 2074810 ']' 00:04:52.858 08:05:26 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:52.858 08:05:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.858 08:05:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:52.858 08:05:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:52.858 08:05:26 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:52.858 08:05:26 -- common/autotest_common.sh@10 -- # set +x
00:04:52.858 [2024-02-13 08:05:26.326458] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:04:52.858 [2024-02-13 08:05:26.326505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074810 ]
00:04:52.858 EAL: No free 2048 kB hugepages reported on node 1
00:04:53.116 [2024-02-13 08:05:26.765526] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.374 [2024-02-13 08:05:26.853694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:53.374 [2024-02-13 08:05:26.853792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.374 [2024-02-13 08:05:26.853812] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:04:53.631 08:05:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:53.631 08:05:27 -- common/autotest_common.sh@850 -- # return 0
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@35 -- # echo ''
00:04:53.631
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:04:53.631 INFO: shutting down applications...
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@40 -- # local app=target
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2074810 ]]
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2074810
00:04:53.631 [2024-02-13 08:05:27.112882] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2074810
00:04:53.631 08:05:27 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2074810
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@52 -- # break
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:04:54.223 SPDK target shutdown done
00:04:54.223 08:05:27 -- json_config/json_config_extra_key.sh@82 -- # echo Success
00:04:54.223 Success
00:04:54.223
00:04:54.223 real 0m1.426s
00:04:54.223 user 0m1.051s
00:04:54.223 sys 0m0.543s
00:04:54.223 08:05:27 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:54.223 08:05:27 -- common/autotest_common.sh@10 -- # set +x
00:04:54.223 ************************************
00:04:54.223 END TEST json_config_extra_key
00:04:54.223 ************************************
00:04:54.223 08:05:27 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:54.223 08:05:27 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:54.223 08:05:27 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:54.223 08:05:27 -- common/autotest_common.sh@10 -- # set +x
00:04:54.223 ************************************
00:04:54.223 START TEST alias_rpc
00:04:54.223 ************************************
00:04:54.223 08:05:27 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:54.223 * Looking for test storage...
00:04:54.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:04:54.223 08:05:27 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:54.223 08:05:27 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2075092
00:04:54.223 08:05:27 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2075092
00:04:54.223 08:05:27 -- common/autotest_common.sh@817 -- # '[' -z 2075092 ']'
00:04:54.223 08:05:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:54.223 08:05:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:54.223 08:05:27 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:54.223 08:05:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:54.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:54.223 08:05:27 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:54.223 08:05:27 -- common/autotest_common.sh@10 -- # set +x
00:04:54.223 [2024-02-13 08:05:27.787981] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:04:54.223 [2024-02-13 08:05:27.788033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075092 ]
00:04:54.223 EAL: No free 2048 kB hugepages reported on node 1
00:04:54.223 [2024-02-13 08:05:27.847024] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.493 [2024-02-13 08:05:27.923958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:54.493 [2024-02-13 08:05:27.924072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.062 08:05:28 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:55.062 08:05:28 -- common/autotest_common.sh@850 -- # return 0
00:04:55.062 08:05:28 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:04:55.062 08:05:28 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2075092
00:04:55.062 08:05:28 -- common/autotest_common.sh@924 -- # '[' -z 2075092 ']'
00:04:55.062 08:05:28 -- common/autotest_common.sh@928 -- # kill -0 2075092
00:04:55.062 08:05:28 -- common/autotest_common.sh@929 -- # uname
00:04:55.062 08:05:28 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:04:55.062 08:05:28 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2075092
00:04:55.321 08:05:28 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:04:55.321 08:05:28 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:04:55.321 08:05:28 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2075092'
00:04:55.321 killing process with pid 2075092
00:04:55.321 08:05:28 -- common/autotest_common.sh@943 -- # kill 2075092
00:04:55.321 08:05:28 -- common/autotest_common.sh@948 -- # wait 2075092
00:04:55.581
00:04:55.581 real 0m1.443s
00:04:55.581 user 0m1.553s
00:04:55.581 sys 0m0.370s
00:04:55.581 08:05:29 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:55.581 08:05:29 -- common/autotest_common.sh@10 -- # set +x
00:04:55.581 ************************************
00:04:55.581 END TEST alias_rpc
00:04:55.581 ************************************
00:04:55.581 08:05:29 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]]
00:04:55.581 08:05:29 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:55.581 08:05:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:55.581 08:05:29 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:55.581 08:05:29 -- common/autotest_common.sh@10 -- # set +x
00:04:55.581 ************************************
00:04:55.581 START TEST spdkcli_tcp
00:04:55.581 ************************************
00:04:55.581 08:05:29 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:55.581 * Looking for test storage...
00:04:55.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:04:55.581 08:05:29 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:04:55.581 08:05:29 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:55.581 08:05:29 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:55.581 08:05:29 -- common/autotest_common.sh@10 -- # set +x
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2075375
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@27 -- # waitforlisten 2075375
00:04:55.581 08:05:29 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:55.581 08:05:29 -- common/autotest_common.sh@817 -- # '[' -z 2075375 ']'
00:04:55.581 08:05:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.581 08:05:29 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:55.581 08:05:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:55.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:55.581 08:05:29 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:55.581 08:05:29 -- common/autotest_common.sh@10 -- # set +x
00:04:55.581 [2024-02-13 08:05:29.266632] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:04:55.581 [2024-02-13 08:05:29.266687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075375 ]
00:04:55.841 EAL: No free 2048 kB hugepages reported on node 1
00:04:55.841 [2024-02-13 08:05:29.324681] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:55.841 [2024-02-13 08:05:29.400387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:55.841 [2024-02-13 08:05:29.400524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:55.841 [2024-02-13 08:05:29.400527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:56.410 08:05:30 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:56.410 08:05:30 -- common/autotest_common.sh@850 -- # return 0
00:04:56.410 08:05:30 -- spdkcli/tcp.sh@31 -- # socat_pid=2075452
00:04:56.410 08:05:30 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:04:56.410 08:05:30 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:04:56.670 [
00:04:56.670 "bdev_malloc_delete",
00:04:56.670 "bdev_malloc_create",
00:04:56.670 "bdev_null_resize",
00:04:56.670 "bdev_null_delete",
00:04:56.670 "bdev_null_create",
00:04:56.670 "bdev_nvme_cuse_unregister",
00:04:56.670 "bdev_nvme_cuse_register",
00:04:56.670 "bdev_opal_new_user",
00:04:56.670 "bdev_opal_set_lock_state",
00:04:56.670 "bdev_opal_delete",
00:04:56.670 "bdev_opal_get_info",
00:04:56.670 "bdev_opal_create",
00:04:56.670 "bdev_nvme_opal_revert",
00:04:56.670 "bdev_nvme_opal_init",
00:04:56.670 "bdev_nvme_send_cmd",
00:04:56.670 "bdev_nvme_get_path_iostat",
00:04:56.670 "bdev_nvme_get_mdns_discovery_info",
00:04:56.670 "bdev_nvme_stop_mdns_discovery",
00:04:56.670 "bdev_nvme_start_mdns_discovery",
00:04:56.670 "bdev_nvme_set_multipath_policy",
00:04:56.670 "bdev_nvme_set_preferred_path",
00:04:56.670 "bdev_nvme_get_io_paths",
00:04:56.670 "bdev_nvme_remove_error_injection",
00:04:56.670 "bdev_nvme_add_error_injection",
00:04:56.670 "bdev_nvme_get_discovery_info",
00:04:56.670 "bdev_nvme_stop_discovery",
00:04:56.670 "bdev_nvme_start_discovery",
00:04:56.670 "bdev_nvme_get_controller_health_info",
00:04:56.670 "bdev_nvme_disable_controller",
00:04:56.670 "bdev_nvme_enable_controller",
00:04:56.670 "bdev_nvme_reset_controller",
00:04:56.670 "bdev_nvme_get_transport_statistics",
00:04:56.670 "bdev_nvme_apply_firmware",
00:04:56.670 "bdev_nvme_detach_controller",
00:04:56.670 "bdev_nvme_get_controllers",
00:04:56.670 "bdev_nvme_attach_controller",
00:04:56.670 "bdev_nvme_set_hotplug",
00:04:56.670 "bdev_nvme_set_options",
00:04:56.670 "bdev_passthru_delete",
00:04:56.670 "bdev_passthru_create",
00:04:56.670 "bdev_lvol_grow_lvstore",
00:04:56.670 "bdev_lvol_get_lvols",
00:04:56.670 "bdev_lvol_get_lvstores",
00:04:56.670 "bdev_lvol_delete",
00:04:56.670 "bdev_lvol_set_read_only",
00:04:56.670 "bdev_lvol_resize",
00:04:56.670 "bdev_lvol_decouple_parent",
00:04:56.670 "bdev_lvol_inflate",
00:04:56.670 "bdev_lvol_rename",
00:04:56.670 "bdev_lvol_clone_bdev",
00:04:56.670 "bdev_lvol_clone",
00:04:56.670 "bdev_lvol_snapshot",
00:04:56.670 "bdev_lvol_create",
00:04:56.670 "bdev_lvol_delete_lvstore",
00:04:56.670 "bdev_lvol_rename_lvstore",
00:04:56.670 "bdev_lvol_create_lvstore",
00:04:56.670 "bdev_raid_set_options",
00:04:56.670 "bdev_raid_remove_base_bdev",
00:04:56.670 "bdev_raid_add_base_bdev",
00:04:56.670 "bdev_raid_delete",
00:04:56.670 "bdev_raid_create",
00:04:56.670 "bdev_raid_get_bdevs",
00:04:56.670 "bdev_error_inject_error",
00:04:56.670 "bdev_error_delete",
00:04:56.670 "bdev_error_create",
00:04:56.670 "bdev_split_delete",
00:04:56.670 "bdev_split_create",
00:04:56.670 "bdev_delay_delete",
00:04:56.670 "bdev_delay_create",
00:04:56.670 "bdev_delay_update_latency",
00:04:56.670 "bdev_zone_block_delete",
00:04:56.670 "bdev_zone_block_create",
00:04:56.670 "blobfs_create",
00:04:56.670 "blobfs_detect",
00:04:56.670 "blobfs_set_cache_size",
00:04:56.670 "bdev_aio_delete",
00:04:56.670 "bdev_aio_rescan",
00:04:56.670 "bdev_aio_create",
00:04:56.670 "bdev_ftl_set_property",
00:04:56.670 "bdev_ftl_get_properties",
00:04:56.670 "bdev_ftl_get_stats",
00:04:56.670 "bdev_ftl_unmap",
00:04:56.670 "bdev_ftl_unload",
00:04:56.670 "bdev_ftl_delete",
00:04:56.670 "bdev_ftl_load",
00:04:56.670 "bdev_ftl_create",
00:04:56.670 "bdev_virtio_attach_controller",
00:04:56.670 "bdev_virtio_scsi_get_devices",
00:04:56.670 "bdev_virtio_detach_controller",
00:04:56.670 "bdev_virtio_blk_set_hotplug",
00:04:56.670 "bdev_iscsi_delete",
00:04:56.670 "bdev_iscsi_create",
00:04:56.670 "bdev_iscsi_set_options",
00:04:56.670 "accel_error_inject_error",
00:04:56.670 "ioat_scan_accel_module",
00:04:56.671 "dsa_scan_accel_module",
00:04:56.671 "iaa_scan_accel_module",
00:04:56.671 "iscsi_set_options",
00:04:56.671 "iscsi_get_auth_groups",
00:04:56.671 "iscsi_auth_group_remove_secret",
00:04:56.671 "iscsi_auth_group_add_secret",
00:04:56.671 "iscsi_delete_auth_group",
00:04:56.671 "iscsi_create_auth_group",
00:04:56.671 "iscsi_set_discovery_auth",
00:04:56.671 "iscsi_get_options",
00:04:56.671 "iscsi_target_node_request_logout",
00:04:56.671 "iscsi_target_node_set_redirect",
00:04:56.671 "iscsi_target_node_set_auth",
00:04:56.671 "iscsi_target_node_add_lun",
00:04:56.671 "iscsi_get_connections",
00:04:56.671 "iscsi_portal_group_set_auth",
00:04:56.671 "iscsi_start_portal_group",
00:04:56.671 "iscsi_delete_portal_group",
00:04:56.671 "iscsi_create_portal_group",
00:04:56.671 "iscsi_get_portal_groups",
00:04:56.671 "iscsi_delete_target_node",
00:04:56.671 "iscsi_target_node_remove_pg_ig_maps",
00:04:56.671 "iscsi_target_node_add_pg_ig_maps",
00:04:56.671 "iscsi_create_target_node",
00:04:56.671 "iscsi_get_target_nodes",
00:04:56.671 "iscsi_delete_initiator_group",
00:04:56.671 "iscsi_initiator_group_remove_initiators",
00:04:56.671 "iscsi_initiator_group_add_initiators",
00:04:56.671 "iscsi_create_initiator_group",
00:04:56.671 "iscsi_get_initiator_groups",
00:04:56.671 "nvmf_set_crdt",
00:04:56.671 "nvmf_set_config",
00:04:56.671 "nvmf_set_max_subsystems",
00:04:56.671 "nvmf_subsystem_get_listeners",
00:04:56.671 "nvmf_subsystem_get_qpairs",
00:04:56.671 "nvmf_subsystem_get_controllers",
00:04:56.671 "nvmf_get_stats",
00:04:56.671 "nvmf_get_transports",
00:04:56.671 "nvmf_create_transport",
00:04:56.671 "nvmf_get_targets",
00:04:56.671 "nvmf_delete_target",
00:04:56.671 "nvmf_create_target",
00:04:56.671 "nvmf_subsystem_allow_any_host",
00:04:56.671 "nvmf_subsystem_remove_host",
00:04:56.671 "nvmf_subsystem_add_host",
00:04:56.671 "nvmf_subsystem_remove_ns",
00:04:56.671 "nvmf_subsystem_add_ns",
00:04:56.671 "nvmf_subsystem_listener_set_ana_state",
00:04:56.671 "nvmf_discovery_get_referrals",
00:04:56.671 "nvmf_discovery_remove_referral",
00:04:56.671 "nvmf_discovery_add_referral",
00:04:56.671 "nvmf_subsystem_remove_listener",
00:04:56.671 "nvmf_subsystem_add_listener",
00:04:56.671 "nvmf_delete_subsystem",
00:04:56.671 "nvmf_create_subsystem",
00:04:56.671 "nvmf_get_subsystems",
00:04:56.671 "env_dpdk_get_mem_stats",
00:04:56.671 "nbd_get_disks",
00:04:56.671 "nbd_stop_disk",
00:04:56.671 "nbd_start_disk",
00:04:56.671 "ublk_recover_disk",
00:04:56.671 "ublk_get_disks",
00:04:56.671 "ublk_stop_disk",
00:04:56.671 "ublk_start_disk",
00:04:56.671 "ublk_destroy_target",
00:04:56.671 "ublk_create_target",
00:04:56.671 "virtio_blk_create_transport",
00:04:56.671 "virtio_blk_get_transports",
00:04:56.671 "vhost_controller_set_coalescing",
00:04:56.671 "vhost_get_controllers",
00:04:56.671 "vhost_delete_controller",
00:04:56.671 "vhost_create_blk_controller",
00:04:56.671 "vhost_scsi_controller_remove_target",
00:04:56.671 "vhost_scsi_controller_add_target",
00:04:56.671 "vhost_start_scsi_controller",
00:04:56.671 "vhost_create_scsi_controller",
00:04:56.671 "thread_set_cpumask",
00:04:56.671 "framework_get_scheduler",
00:04:56.671 "framework_set_scheduler",
00:04:56.671 "framework_get_reactors",
00:04:56.671 "thread_get_io_channels",
00:04:56.671 "thread_get_pollers",
00:04:56.671 "thread_get_stats",
00:04:56.671 "framework_monitor_context_switch",
00:04:56.671 "spdk_kill_instance",
00:04:56.671 "log_enable_timestamps",
00:04:56.671 "log_get_flags",
00:04:56.671 "log_clear_flag",
00:04:56.671 "log_set_flag",
00:04:56.671 "log_get_level",
00:04:56.671 "log_set_level",
00:04:56.671 "log_get_print_level",
00:04:56.671 "log_set_print_level",
00:04:56.671 "framework_enable_cpumask_locks",
00:04:56.671 "framework_disable_cpumask_locks",
00:04:56.671 "framework_wait_init",
00:04:56.671 "framework_start_init",
00:04:56.671 "scsi_get_devices",
00:04:56.671 "bdev_get_histogram",
00:04:56.671 "bdev_enable_histogram",
00:04:56.671 "bdev_set_qos_limit",
00:04:56.671 "bdev_set_qd_sampling_period",
00:04:56.671 "bdev_get_bdevs",
00:04:56.671 "bdev_reset_iostat",
00:04:56.671 "bdev_get_iostat",
00:04:56.671 "bdev_examine",
00:04:56.671 "bdev_wait_for_examine",
00:04:56.671 "bdev_set_options",
00:04:56.671 "notify_get_notifications",
00:04:56.671 "notify_get_types",
00:04:56.671 "accel_get_stats",
00:04:56.671 "accel_set_options",
00:04:56.671 "accel_set_driver",
00:04:56.671 "accel_crypto_key_destroy",
00:04:56.671 "accel_crypto_keys_get",
00:04:56.671 "accel_crypto_key_create",
00:04:56.671 "accel_assign_opc",
00:04:56.671 "accel_get_module_info",
00:04:56.671 "accel_get_opc_assignments",
00:04:56.671 "vmd_rescan",
00:04:56.671 "vmd_remove_device",
00:04:56.671 "vmd_enable",
00:04:56.671 "sock_set_default_impl",
00:04:56.671 "sock_impl_set_options",
00:04:56.671 "sock_impl_get_options",
00:04:56.671 "iobuf_get_stats",
00:04:56.671 "iobuf_set_options",
00:04:56.671 "framework_get_pci_devices",
00:04:56.671 "framework_get_config",
00:04:56.671 "framework_get_subsystems",
00:04:56.671 "trace_get_info",
00:04:56.671 "trace_get_tpoint_group_mask",
00:04:56.671 "trace_disable_tpoint_group",
00:04:56.671 "trace_enable_tpoint_group",
00:04:56.671 "trace_clear_tpoint_mask",
00:04:56.671 "trace_set_tpoint_mask",
00:04:56.671 "spdk_get_version",
00:04:56.671 "rpc_get_methods"
00:04:56.671 ]
00:04:56.671 08:05:30 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:56.671 08:05:30 -- common/autotest_common.sh@716 -- # xtrace_disable
00:04:56.671 08:05:30 -- common/autotest_common.sh@10 -- # set +x
00:04:56.671 08:05:30 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:56.671 08:05:30 -- spdkcli/tcp.sh@38 -- # killprocess 2075375
00:04:56.671 08:05:30 -- common/autotest_common.sh@924 -- # '[' -z 2075375 ']'
00:04:56.671 08:05:30 -- common/autotest_common.sh@928 -- # kill -0 2075375
00:04:56.671 08:05:30 -- common/autotest_common.sh@929 -- # uname
00:04:56.671 08:05:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:04:56.671 08:05:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2075375
00:04:56.671 08:05:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:04:56.671 08:05:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:04:56.671 08:05:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2075375'
00:04:56.671 killing process with pid 2075375
00:04:56.671 08:05:30 -- common/autotest_common.sh@943 -- # kill 2075375
00:04:56.671 08:05:30 -- common/autotest_common.sh@948 -- # wait 2075375
00:04:57.240
00:04:57.240 real 0m1.497s
00:04:57.240 user 0m2.781s
00:04:57.240 sys 0m0.409s
00:04:57.240 08:05:30 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:57.240 08:05:30 -- common/autotest_common.sh@10 -- # set +x
00:04:57.240 ************************************
00:04:57.240 END TEST spdkcli_tcp
00:04:57.240 ************************************
00:04:57.240 08:05:30 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:57.240 08:05:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:57.241 08:05:30 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:57.241 08:05:30 -- common/autotest_common.sh@10 -- # set +x
00:04:57.241 ************************************
00:04:57.241 START TEST dpdk_mem_utility
00:04:57.241 ************************************
00:04:57.241 08:05:30 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:57.241 * Looking for test storage...
00:04:57.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:57.241 08:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:57.241 08:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2075672
00:04:57.241 08:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:57.241 08:05:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2075672
00:04:57.241 08:05:30 -- common/autotest_common.sh@817 -- # '[' -z 2075672 ']'
00:04:57.241 08:05:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.241 08:05:30 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:57.241 08:05:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:57.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.241 08:05:30 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:57.241 08:05:30 -- common/autotest_common.sh@10 -- # set +x
00:04:57.241 [2024-02-13 08:05:30.793522] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:04:57.241 [2024-02-13 08:05:30.793572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075672 ]
00:04:57.241 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.241 [2024-02-13 08:05:30.852499] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.241 [2024-02-13 08:05:30.920846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:04:57.241 [2024-02-13 08:05:30.920982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.180 08:05:31 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:58.180 08:05:31 -- common/autotest_common.sh@850 -- # return 0
00:04:58.180 08:05:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:58.180 08:05:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:58.180 08:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.180 08:05:31 -- common/autotest_common.sh@10 -- # set +x
00:04:58.180 {
00:04:58.180 "filename": "/tmp/spdk_mem_dump.txt"
00:04:58.180 }
00:04:58.180 08:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.180 08:05:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:58.180 DPDK memory size 814.000000 MiB in 1 heap(s)
00:04:58.180 1 heaps totaling size 814.000000 MiB
00:04:58.180 size: 814.000000 MiB heap id: 0
00:04:58.180 end heaps----------
00:04:58.180 8 mempools totaling size 598.116089 MiB
00:04:58.181 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:58.181 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:58.181 size: 84.521057 MiB name: bdev_io_2075672
00:04:58.181 size: 51.011292 MiB name: evtpool_2075672
00:04:58.181 size: 50.003479 MiB name: msgpool_2075672
00:04:58.181 size: 21.763794 MiB name: PDU_Pool
00:04:58.181 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:58.181 size: 0.026123 MiB name: Session_Pool
00:04:58.181 end mempools-------
00:04:58.181 6 memzones totaling size 4.142822 MiB
00:04:58.181 size: 1.000366 MiB name: RG_ring_0_2075672
00:04:58.181 size: 1.000366 MiB name: RG_ring_1_2075672
00:04:58.181 size: 1.000366 MiB name: RG_ring_4_2075672
00:04:58.181 size: 1.000366 MiB name: RG_ring_5_2075672
00:04:58.181 size: 0.125366 MiB name: RG_ring_2_2075672
00:04:58.181 size: 0.015991 MiB name: RG_ring_3_2075672
00:04:58.181 end memzones-------
00:04:58.181 08:05:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:58.181 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:04:58.181 list of free elements. size: 12.519348 MiB
00:04:58.181 element at address: 0x200000400000 with size: 1.999512 MiB
00:04:58.181 element at address: 0x200018e00000 with size: 0.999878 MiB
00:04:58.181 element at address: 0x200019000000 with size: 0.999878 MiB
00:04:58.181 element at address: 0x200003e00000 with size: 0.996277 MiB
00:04:58.181 element at address: 0x200031c00000 with size: 0.994446 MiB
00:04:58.181 element at address: 0x200013800000 with size: 0.978699 MiB
00:04:58.181 element at address: 0x200007000000 with size: 0.959839 MiB
00:04:58.181 element at address: 0x200019200000 with size: 0.936584 MiB
00:04:58.181 element at address: 0x200000200000 with size: 0.841614 MiB
00:04:58.181 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:04:58.181 element at address: 0x20000b200000 with size: 0.490723 MiB
00:04:58.181 element at address: 0x200000800000 with size: 0.487793 MiB
00:04:58.181 element at address: 0x200019400000 with size: 0.485657 MiB
00:04:58.181 element at address: 0x200027e00000 with size: 0.410034 MiB
00:04:58.181 element at address: 0x200003a00000 with size: 0.355530 MiB
00:04:58.181 list of standard malloc elements. size: 199.218079 MiB
00:04:58.181 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:04:58.181 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:04:58.181 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:58.181 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:04:58.181 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:04:58.181 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:58.181 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:04:58.181 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:58.181 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:04:58.181 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:58.181 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:04:58.181 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200003adb300 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200003adb500 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200003affa80 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200003affb40 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:04:58.181 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:04:58.181 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:04:58.181 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:04:58.181 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:04:58.181 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:04:58.181 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200027e69040 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:04:58.181 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:04:58.181 list of memzone associated elements. size: 602.262573 MiB
00:04:58.181 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:04:58.181 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:58.181 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:04:58.181 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:58.181 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:04:58.181 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2075672_0
00:04:58.181 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:04:58.181 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2075672_0
00:04:58.181 element at address: 0x200003fff380 with size: 48.003052 MiB
00:04:58.181 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2075672_0
00:04:58.181 element at address: 0x2000195be940 with size: 20.255554 MiB
00:04:58.181 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:58.181 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:04:58.181 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:58.181 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:04:58.181 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2075672
00:04:58.181 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:04:58.181 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2075672
00:04:58.181 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:58.181 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2075672
00:04:58.181 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:04:58.181 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:58.181 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:04:58.181 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:58.181 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:04:58.181 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:58.181 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:04:58.181 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:58.181 element at address: 0x200003eff180 with size: 1.000488 MiB
00:04:58.181 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2075672
00:04:58.181 element at address: 0x200003affc00 with size: 1.000488 MiB
00:04:58.181 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2075672
00:04:58.181 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:04:58.181 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2075672
00:04:58.181 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:04:58.181 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2075672
00:04:58.181 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:04:58.181 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2075672
00:04:58.181 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:04:58.181 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:58.181 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:04:58.181 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:58.181 element at address: 0x20001947c540 with size: 0.250488 MiB
00:04:58.181 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:58.181 element at address: 0x200003adf880 with size: 0.125488 MiB
00:04:58.181 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2075672
00:04:58.181 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:04:58.181 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:58.181 element at address: 0x200027e69100 with size: 0.023743 MiB
00:04:58.181 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:58.181 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:04:58.181 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2075672
00:04:58.181 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:04:58.181 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:58.181 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:04:58.181 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2075672
00:04:58.181 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:04:58.181 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2075672
00:04:58.181 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:04:58.181 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:58.181 08:05:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:58.181 08:05:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2075672
00:04:58.181 08:05:31 -- common/autotest_common.sh@924 -- # '[' -z 2075672 ']'
00:04:58.181 08:05:31 -- common/autotest_common.sh@928 -- # kill -0 2075672
00:04:58.181 08:05:31 -- common/autotest_common.sh@929 -- # uname
00:04:58.181 08:05:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:04:58.181 08:05:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2075672
00:04:58.181 08:05:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:04:58.181 08:05:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:04:58.181 08:05:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2075672'
00:04:58.181 killing process with pid 2075672
00:04:58.181 08:05:31 -- common/autotest_common.sh@943 -- # kill 2075672
00:04:58.181 08:05:31 -- common/autotest_common.sh@948 -- # wait 2075672
00:04:58.441
00:04:58.441 real 0m1.380s
00:04:58.441 user 0m1.471s
00:04:58.441 sys 0m0.360s
00:04:58.441 08:05:32 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:58.441 08:05:32 -- common/autotest_common.sh@10 -- # set +x
00:04:58.441
************************************ 00:04:58.441 END TEST dpdk_mem_utility 00:04:58.441 ************************************ 00:04:58.442 08:05:32 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.442 08:05:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:58.442 08:05:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:58.442 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.442 ************************************ 00:04:58.442 START TEST event 00:04:58.442 ************************************ 00:04:58.442 08:05:32 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.701 * Looking for test storage... 00:04:58.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:58.701 08:05:32 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:58.701 08:05:32 -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.701 08:05:32 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.701 08:05:32 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:04:58.701 08:05:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:58.701 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.701 ************************************ 00:04:58.701 START TEST event_perf 00:04:58.701 ************************************ 00:04:58.701 08:05:32 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.701 Running I/O for 1 seconds...[2024-02-13 08:05:32.190911] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:04:58.701 [2024-02-13 08:05:32.190985] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075962 ] 00:04:58.701 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.701 [2024-02-13 08:05:32.255218] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.701 [2024-02-13 08:05:32.328176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.701 [2024-02-13 08:05:32.328271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.701 [2024-02-13 08:05:32.328379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.701 [2024-02-13 08:05:32.328381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.083 Running I/O for 1 seconds... 00:05:00.083 lcore 0: 202002 00:05:00.083 lcore 1: 202002 00:05:00.083 lcore 2: 202001 00:05:00.083 lcore 3: 202002 00:05:00.083 done. 
00:05:00.083 00:05:00.083 real 0m1.244s 00:05:00.083 user 0m4.159s 00:05:00.083 sys 0m0.082s 00:05:00.083 08:05:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.083 08:05:33 -- common/autotest_common.sh@10 -- # set +x 00:05:00.083 ************************************ 00:05:00.083 END TEST event_perf 00:05:00.083 ************************************ 00:05:00.083 08:05:33 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.083 08:05:33 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:05:00.083 08:05:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:00.083 08:05:33 -- common/autotest_common.sh@10 -- # set +x 00:05:00.083 ************************************ 00:05:00.083 START TEST event_reactor 00:05:00.083 ************************************ 00:05:00.083 08:05:33 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.083 [2024-02-13 08:05:33.468902] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:00.083 [2024-02-13 08:05:33.468981] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076212 ] 00:05:00.083 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.083 [2024-02-13 08:05:33.529912] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.083 [2024-02-13 08:05:33.596481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.022 test_start 00:05:01.022 oneshot 00:05:01.022 tick 100 00:05:01.022 tick 100 00:05:01.022 tick 250 00:05:01.022 tick 100 00:05:01.022 tick 100 00:05:01.022 tick 100 00:05:01.022 tick 250 00:05:01.022 tick 500 00:05:01.022 tick 100 00:05:01.022 tick 100 00:05:01.022 tick 250 00:05:01.022 tick 100 00:05:01.022 tick 100 00:05:01.022 test_end 00:05:01.022 00:05:01.022 real 0m1.228s 00:05:01.022 user 0m1.151s 00:05:01.022 sys 0m0.072s 00:05:01.022 08:05:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.022 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:05:01.022 ************************************ 00:05:01.022 END TEST event_reactor 00:05:01.022 ************************************ 00:05:01.022 08:05:34 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.022 08:05:34 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:05:01.022 08:05:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:01.022 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:05:01.282 ************************************ 00:05:01.282 START TEST event_reactor_perf 00:05:01.282 ************************************ 00:05:01.282 08:05:34 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.282 [2024-02-13 08:05:34.732320] 
Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:01.282 [2024-02-13 08:05:34.732398] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076461 ] 00:05:01.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.283 [2024-02-13 08:05:34.794204] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.283 [2024-02-13 08:05:34.860966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.665 test_start 00:05:02.665 test_end 00:05:02.665 Performance: 510734 events per second 00:05:02.665 00:05:02.665 real 0m1.234s 00:05:02.665 user 0m1.154s 00:05:02.665 sys 0m0.076s 00:05:02.665 08:05:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.665 08:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 ************************************ 00:05:02.665 END TEST event_reactor_perf 00:05:02.665 ************************************ 00:05:02.665 08:05:35 -- event/event.sh@49 -- # uname -s 00:05:02.665 08:05:35 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.665 08:05:35 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.665 08:05:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:02.665 08:05:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:02.665 08:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 ************************************ 00:05:02.665 START TEST event_scheduler 00:05:02.665 ************************************ 00:05:02.665 08:05:35 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.665 * Looking for test storage... 
00:05:02.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:02.665 08:05:36 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.665 08:05:36 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2076739 00:05:02.665 08:05:36 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.665 08:05:36 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.665 08:05:36 -- scheduler/scheduler.sh@37 -- # waitforlisten 2076739 00:05:02.665 08:05:36 -- common/autotest_common.sh@817 -- # '[' -z 2076739 ']' 00:05:02.665 08:05:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.665 08:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.665 08:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.665 08:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.665 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:05:02.665 [2024-02-13 08:05:36.105344] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
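[editor's note] The `waitforlisten 2076739` step traced above blocks until the scheduler app is up and listening on /var/tmp/spdk.sock. A minimal sketch of that polling pattern, assuming a simplified helper: the name `waitforlisten_sketch`, the file-existence check (standing in for a real socket/RPC liveness probe), and the retry interval are all illustrative, not SPDK's actual implementation.

```shell
# Hypothetical sketch of a waitforlisten-style helper: poll until a path
# (standing in for the UNIX domain socket /var/tmp/spdk.sock) appears,
# giving up after max_retries attempts.
waitforlisten_sketch() {
    path=$1
    max_retries=${2:-100}   # the log's helper defaults max_retries to 100
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        # the real helper would probe the socket/RPC endpoint, not a file
        if [ -e "$path" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

The real helper also takes the target PID so it can fail fast if the process dies before the socket ever appears.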
00:05:02.665 [2024-02-13 08:05:36.105392] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076739 ] 00:05:02.665 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.665 [2024-02-13 08:05:36.159693] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.665 [2024-02-13 08:05:36.230354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.665 [2024-02-13 08:05:36.230444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.665 [2024-02-13 08:05:36.230531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.665 [2024-02-13 08:05:36.230533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.234 08:05:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.234 08:05:36 -- common/autotest_common.sh@850 -- # return 0 00:05:03.234 08:05:36 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.234 08:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.234 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:05:03.234 POWER: Env isn't set yet! 00:05:03.234 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:03.234 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.234 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.234 POWER: Attempting to initialise PSTAT power management... 
00:05:03.494 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:03.494 POWER: Initialized successfully for lcore 0 power management 00:05:03.494 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:03.494 POWER: Initialized successfully for lcore 1 power management 00:05:03.494 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:03.494 POWER: Initialized successfully for lcore 2 power management 00:05:03.494 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:03.494 POWER: Initialized successfully for lcore 3 power management 00:05:03.494 08:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:36 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.494 08:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 [2024-02-13 08:05:37.033289] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.494 08:05:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:03.494 08:05:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 ************************************ 00:05:03.494 START TEST scheduler_create_thread 00:05:03.494 ************************************ 00:05:03.494 08:05:37 -- common/autotest_common.sh@1102 -- # scheduler_create_thread 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 2 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 3 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 4 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 
08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 5 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 6 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 7 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 8 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 9 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 10 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:05:03.494 08:05:37 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.494 08:05:37 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.494 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.494 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:04.063 08:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.063 08:05:37 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:04.063 08:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.063 08:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:05.442 08:05:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:05.442 08:05:39 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:05.442 08:05:39 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:05.442 08:05:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:05.442 08:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:06.822 08:05:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:06.822 00:05:06.822 real 0m3.099s 00:05:06.822 user 0m0.023s 00:05:06.822 sys 0m0.004s 00:05:06.822 08:05:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.822 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:06.822 ************************************ 00:05:06.822 END TEST scheduler_create_thread 00:05:06.822 ************************************ 00:05:06.822 08:05:40 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:06.822 08:05:40 -- 
scheduler/scheduler.sh@46 -- # killprocess 2076739 00:05:06.822 08:05:40 -- common/autotest_common.sh@924 -- # '[' -z 2076739 ']' 00:05:06.822 08:05:40 -- common/autotest_common.sh@928 -- # kill -0 2076739 00:05:06.822 08:05:40 -- common/autotest_common.sh@929 -- # uname 00:05:06.822 08:05:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:06.822 08:05:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2076739 00:05:06.822 08:05:40 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:05:06.822 08:05:40 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:05:06.822 08:05:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2076739' 00:05:06.822 killing process with pid 2076739 00:05:06.822 08:05:40 -- common/autotest_common.sh@943 -- # kill 2076739 00:05:06.822 08:05:40 -- common/autotest_common.sh@948 -- # wait 2076739 00:05:07.082 [2024-02-13 08:05:40.520473] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
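[editor's note] The `killprocess 2076739` trace above shows the teardown pattern used throughout this log: probe the PID with `kill -0`, look up its command name with `ps`, refuse to signal a bare `sudo` wrapper, then kill and reap it. A minimal sketch under the assumption of a simplified helper (the name `killprocess_sketch` is illustrative; the real common/autotest_common.sh version has additional guards):

```shell
# Hypothetical, simplified killprocess-style helper mirroring the trace.
killprocess_sketch() {
    pid=$1
    # kill -0 sends no signal; it only checks $pid exists and is signalable
    kill -0 "$pid" 2>/dev/null || return 1
    # command name only (comm=), no header line, as in the trace
    process_name=$(ps --no-headers -o comm= -p "$pid")
    # safety check from the log: never TERM a plain sudo process
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so the PID is fully gone before returning
    wait "$pid" 2>/dev/null || true
    return 0
}
```

Reaping with `wait` matters here: without it the killed child lingers as a zombie and a later `kill -0` would still report it alive.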
00:05:07.082 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:07.082 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:07.082 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:07.082 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:07.082 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:07.082 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:07.082 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:07.082 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:07.082 00:05:07.082 real 0m4.765s 00:05:07.082 user 0m9.347s 00:05:07.082 sys 0m0.320s 00:05:07.082 08:05:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.082 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.082 ************************************ 00:05:07.082 END TEST event_scheduler 00:05:07.082 ************************************ 00:05:07.342 08:05:40 -- event/event.sh@51 -- # modprobe -n nbd 00:05:07.342 08:05:40 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:07.342 08:05:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:07.342 08:05:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:07.342 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.342 ************************************ 00:05:07.342 START TEST app_repeat 00:05:07.342 ************************************ 00:05:07.342 08:05:40 -- common/autotest_common.sh@1102 -- # app_repeat_test 00:05:07.342 08:05:40 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.342 08:05:40 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.342 
08:05:40 -- event/event.sh@13 -- # local nbd_list 00:05:07.342 08:05:40 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.342 08:05:40 -- event/event.sh@14 -- # local bdev_list 00:05:07.342 08:05:40 -- event/event.sh@15 -- # local repeat_times=4 00:05:07.342 08:05:40 -- event/event.sh@17 -- # modprobe nbd 00:05:07.342 08:05:40 -- event/event.sh@19 -- # repeat_pid=2077483 00:05:07.342 08:05:40 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.342 08:05:40 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:07.342 08:05:40 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2077483' 00:05:07.342 Process app_repeat pid: 2077483 00:05:07.342 08:05:40 -- event/event.sh@23 -- # for i in {0..2} 00:05:07.342 08:05:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:07.342 spdk_app_start Round 0 00:05:07.342 08:05:40 -- event/event.sh@25 -- # waitforlisten 2077483 /var/tmp/spdk-nbd.sock 00:05:07.342 08:05:40 -- common/autotest_common.sh@817 -- # '[' -z 2077483 ']' 00:05:07.342 08:05:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.342 08:05:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:07.342 08:05:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.342 08:05:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:07.342 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.342 [2024-02-13 08:05:40.829322] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:07.342 [2024-02-13 08:05:40.829382] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077483 ] 00:05:07.342 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.342 [2024-02-13 08:05:40.891997] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.342 [2024-02-13 08:05:40.970711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.342 [2024-02-13 08:05:40.970715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.280 08:05:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.280 08:05:41 -- common/autotest_common.sh@850 -- # return 0 00:05:08.280 08:05:41 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.280 Malloc0 00:05:08.280 08:05:41 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.540 Malloc1 00:05:08.540 08:05:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@12 -- # local i 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.540 08:05:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.540 /dev/nbd0 00:05:08.540 08:05:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.540 08:05:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.540 08:05:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:08.540 08:05:42 -- common/autotest_common.sh@855 -- # local i 00:05:08.540 08:05:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:08.540 08:05:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:08.540 08:05:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:08.540 08:05:42 -- common/autotest_common.sh@859 -- # break 00:05:08.540 08:05:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:08.540 08:05:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:08.540 08:05:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.540 1+0 records in 00:05:08.540 1+0 records out 00:05:08.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181502 s, 22.6 MB/s 00:05:08.540 08:05:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.540 08:05:42 -- common/autotest_common.sh@872 -- # size=4096 00:05:08.540 08:05:42 -- common/autotest_common.sh@873 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.540 08:05:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:08.540 08:05:42 -- common/autotest_common.sh@875 -- # return 0 00:05:08.540 08:05:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.540 08:05:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.540 08:05:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.800 /dev/nbd1 00:05:08.800 08:05:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.800 08:05:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.800 08:05:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:08.800 08:05:42 -- common/autotest_common.sh@855 -- # local i 00:05:08.800 08:05:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:08.800 08:05:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:08.800 08:05:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:08.800 08:05:42 -- common/autotest_common.sh@859 -- # break 00:05:08.800 08:05:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:08.800 08:05:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:08.800 08:05:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.800 1+0 records in 00:05:08.800 1+0 records out 00:05:08.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195289 s, 21.0 MB/s 00:05:08.800 08:05:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.800 08:05:42 -- common/autotest_common.sh@872 -- # size=4096 00:05:08.800 08:05:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.800 08:05:42 -- common/autotest_common.sh@874 -- # '[' 
4096 '!=' 0 ']' 00:05:08.800 08:05:42 -- common/autotest_common.sh@875 -- # return 0 00:05:08.800 08:05:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.800 08:05:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.800 08:05:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.800 08:05:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.800 08:05:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.061 { 00:05:09.061 "nbd_device": "/dev/nbd0", 00:05:09.061 "bdev_name": "Malloc0" 00:05:09.061 }, 00:05:09.061 { 00:05:09.061 "nbd_device": "/dev/nbd1", 00:05:09.061 "bdev_name": "Malloc1" 00:05:09.061 } 00:05:09.061 ]' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.061 { 00:05:09.061 "nbd_device": "/dev/nbd0", 00:05:09.061 "bdev_name": "Malloc0" 00:05:09.061 }, 00:05:09.061 { 00:05:09.061 "nbd_device": "/dev/nbd1", 00:05:09.061 "bdev_name": "Malloc1" 00:05:09.061 } 00:05:09.061 ]' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.061 /dev/nbd1' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.061 /dev/nbd1' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.061 
08:05:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.061 256+0 records in 00:05:09.061 256+0 records out 00:05:09.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102818 s, 102 MB/s 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.061 256+0 records in 00:05:09.061 256+0 records out 00:05:09.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135436 s, 77.4 MB/s 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.061 256+0 records in 00:05:09.061 256+0 records out 00:05:09.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146473 s, 71.6 MB/s 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.061 08:05:42 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@51 -- # local i 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.061 08:05:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@41 -- # break 00:05:09.320 08:05:42 -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.321 08:05:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.321 08:05:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@41 -- # break 00:05:09.580 08:05:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@65 -- # true 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.581 08:05:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.581 08:05:43 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.840 08:05:43 -- event/event.sh@35 -- # sleep 3 00:05:10.141 [2024-02-13 08:05:43.638901] app.c: 796:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:05:10.141 [2024-02-13 08:05:43.702776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.141 [2024-02-13 08:05:43.702778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.141 [2024-02-13 08:05:43.743672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.141 [2024-02-13 08:05:43.743714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.465 08:05:46 -- event/event.sh@23 -- # for i in {0..2} 00:05:13.465 08:05:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:13.465 spdk_app_start Round 1 00:05:13.465 08:05:46 -- event/event.sh@25 -- # waitforlisten 2077483 /var/tmp/spdk-nbd.sock 00:05:13.465 08:05:46 -- common/autotest_common.sh@817 -- # '[' -z 2077483 ']' 00:05:13.465 08:05:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.465 08:05:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:13.465 08:05:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:13.465 08:05:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:13.465 08:05:46 -- common/autotest_common.sh@10 -- # set +x 00:05:13.465 08:05:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.465 08:05:46 -- common/autotest_common.sh@850 -- # return 0 00:05:13.465 08:05:46 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.465 Malloc0 00:05:13.465 08:05:46 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.465 Malloc1 00:05:13.465 08:05:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@12 -- # local i 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.465 08:05:46 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.465 /dev/nbd0 00:05:13.465 08:05:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.465 08:05:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.465 08:05:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:13.465 08:05:47 -- common/autotest_common.sh@855 -- # local i 00:05:13.465 08:05:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:13.465 08:05:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:13.465 08:05:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:13.465 08:05:47 -- common/autotest_common.sh@859 -- # break 00:05:13.465 08:05:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:13.465 08:05:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:13.465 08:05:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.465 1+0 records in 00:05:13.465 1+0 records out 00:05:13.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176799 s, 23.2 MB/s 00:05:13.465 08:05:47 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.465 08:05:47 -- common/autotest_common.sh@872 -- # size=4096 00:05:13.465 08:05:47 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.725 08:05:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:13.725 08:05:47 -- common/autotest_common.sh@875 -- # return 0 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:05:13.725 /dev/nbd1 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.725 08:05:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:13.725 08:05:47 -- common/autotest_common.sh@855 -- # local i 00:05:13.725 08:05:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:13.725 08:05:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:13.725 08:05:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:13.725 08:05:47 -- common/autotest_common.sh@859 -- # break 00:05:13.725 08:05:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:13.725 08:05:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:13.725 08:05:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.725 1+0 records in 00:05:13.725 1+0 records out 00:05:13.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187234 s, 21.9 MB/s 00:05:13.725 08:05:47 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.725 08:05:47 -- common/autotest_common.sh@872 -- # size=4096 00:05:13.725 08:05:47 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.725 08:05:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:13.725 08:05:47 -- common/autotest_common.sh@875 -- # return 0 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.725 08:05:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.985 { 00:05:13.985 "nbd_device": "/dev/nbd0", 00:05:13.985 "bdev_name": "Malloc0" 00:05:13.985 }, 00:05:13.985 { 00:05:13.985 "nbd_device": "/dev/nbd1", 00:05:13.985 "bdev_name": "Malloc1" 00:05:13.985 } 00:05:13.985 ]' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.985 { 00:05:13.985 "nbd_device": "/dev/nbd0", 00:05:13.985 "bdev_name": "Malloc0" 00:05:13.985 }, 00:05:13.985 { 00:05:13.985 "nbd_device": "/dev/nbd1", 00:05:13.985 "bdev_name": "Malloc1" 00:05:13.985 } 00:05:13.985 ]' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.985 /dev/nbd1' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.985 /dev/nbd1' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.985 256+0 records in 00:05:13.985 256+0 records out 00:05:13.985 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010384 s, 101 MB/s 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.985 256+0 records in 00:05:13.985 256+0 records out 00:05:13.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139015 s, 75.4 MB/s 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.985 256+0 records in 00:05:13.985 256+0 records out 00:05:13.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143538 s, 73.1 MB/s 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@51 -- # local i 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.985 08:05:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@41 -- # break 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.245 08:05:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.504 08:05:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.504 08:05:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.504 08:05:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.504 08:05:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.504 08:05:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.504 08:05:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.504 08:05:48 -- 
bdev/nbd_common.sh@41 -- # break 00:05:14.504 08:05:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.504 08:05:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.504 08:05:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.504 08:05:48 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.504 08:05:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.504 08:05:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.504 08:05:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@65 -- # true 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.764 08:05:48 -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.764 08:05:48 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.764 08:05:48 -- event/event.sh@35 -- # sleep 3 00:05:15.024 [2024-02-13 08:05:48.610798] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.024 [2024-02-13 08:05:48.674367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.024 [2024-02-13 08:05:48.674370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.283 [2024-02-13 08:05:48.715147] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:15.283 [2024-02-13 08:05:48.715189] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.822 08:05:51 -- event/event.sh@23 -- # for i in {0..2} 00:05:17.822 08:05:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:17.822 spdk_app_start Round 2 00:05:17.822 08:05:51 -- event/event.sh@25 -- # waitforlisten 2077483 /var/tmp/spdk-nbd.sock 00:05:17.822 08:05:51 -- common/autotest_common.sh@817 -- # '[' -z 2077483 ']' 00:05:17.822 08:05:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.822 08:05:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.822 08:05:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.822 08:05:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.822 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.080 08:05:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.080 08:05:51 -- common/autotest_common.sh@850 -- # return 0 00:05:18.080 08:05:51 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.080 Malloc0 00:05:18.080 08:05:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.340 Malloc1 00:05:18.340 08:05:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.340 08:05:51 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@12 -- # local i 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.340 08:05:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.599 /dev/nbd0 00:05:18.599 08:05:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.599 08:05:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.599 08:05:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:18.599 08:05:52 -- common/autotest_common.sh@855 -- # local i 00:05:18.599 08:05:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:18.599 08:05:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:18.599 08:05:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:18.599 08:05:52 -- common/autotest_common.sh@859 -- # break 00:05:18.599 08:05:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:18.599 08:05:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:18.599 08:05:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.599 1+0 records in 00:05:18.599 
1+0 records out 00:05:18.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231953 s, 17.7 MB/s 00:05:18.600 08:05:52 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.600 08:05:52 -- common/autotest_common.sh@872 -- # size=4096 00:05:18.600 08:05:52 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.600 08:05:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:18.600 08:05:52 -- common/autotest_common.sh@875 -- # return 0 00:05:18.600 08:05:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.600 08:05:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.600 08:05:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.860 /dev/nbd1 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.860 08:05:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:18.860 08:05:52 -- common/autotest_common.sh@855 -- # local i 00:05:18.860 08:05:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:18.860 08:05:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:18.860 08:05:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:18.860 08:05:52 -- common/autotest_common.sh@859 -- # break 00:05:18.860 08:05:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:18.860 08:05:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:18.860 08:05:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.860 1+0 records in 00:05:18.860 1+0 records out 00:05:18.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199543 s, 20.5 MB/s 00:05:18.860 08:05:52 -- 
common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.860 08:05:52 -- common/autotest_common.sh@872 -- # size=4096 00:05:18.860 08:05:52 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.860 08:05:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:18.860 08:05:52 -- common/autotest_common.sh@875 -- # return 0 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.860 { 00:05:18.860 "nbd_device": "/dev/nbd0", 00:05:18.860 "bdev_name": "Malloc0" 00:05:18.860 }, 00:05:18.860 { 00:05:18.860 "nbd_device": "/dev/nbd1", 00:05:18.860 "bdev_name": "Malloc1" 00:05:18.860 } 00:05:18.860 ]' 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.860 { 00:05:18.860 "nbd_device": "/dev/nbd0", 00:05:18.860 "bdev_name": "Malloc0" 00:05:18.860 }, 00:05:18.860 { 00:05:18.860 "nbd_device": "/dev/nbd1", 00:05:18.860 "bdev_name": "Malloc1" 00:05:18.860 } 00:05:18.860 ]' 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.860 /dev/nbd1' 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.860 /dev/nbd1' 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.860 
08:05:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.860 08:05:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.120 256+0 records in 00:05:19.120 256+0 records out 00:05:19.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103405 s, 101 MB/s 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.120 256+0 records in 00:05:19.120 256+0 records out 00:05:19.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136678 s, 76.7 MB/s 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.120 256+0 records in 00:05:19.120 256+0 records out 00:05:19.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01437 s, 73.0 MB/s 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:05:19.120 08:05:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.120 08:05:52 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@41 -- # break 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.120 08:05:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@41 -- # break 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.380 08:05:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.639 08:05:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@65 -- # true 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.640 08:05:53 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.640 08:05:53 -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.640 08:05:53 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.899 08:05:53 -- event/event.sh@35 -- # sleep 3 00:05:20.158 [2024-02-13 08:05:53.592475] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.158 [2024-02-13 08:05:53.656061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.158 [2024-02-13 08:05:53.656063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.158 [2024-02-13 08:05:53.696747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.158 [2024-02-13 08:05:53.696789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.452 08:05:56 -- event/event.sh@38 -- # waitforlisten 2077483 /var/tmp/spdk-nbd.sock 00:05:23.452 08:05:56 -- common/autotest_common.sh@817 -- # '[' -z 2077483 ']' 00:05:23.452 08:05:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.452 08:05:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.452 08:05:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:23.452 08:05:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.452 08:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 08:05:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:23.452 08:05:56 -- common/autotest_common.sh@850 -- # return 0 00:05:23.452 08:05:56 -- event/event.sh@39 -- # killprocess 2077483 00:05:23.452 08:05:56 -- common/autotest_common.sh@924 -- # '[' -z 2077483 ']' 00:05:23.452 08:05:56 -- common/autotest_common.sh@928 -- # kill -0 2077483 00:05:23.452 08:05:56 -- common/autotest_common.sh@929 -- # uname 00:05:23.452 08:05:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:23.452 08:05:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2077483 00:05:23.452 08:05:56 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:23.452 08:05:56 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:23.452 08:05:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2077483' 00:05:23.452 killing process with pid 2077483 00:05:23.452 08:05:56 -- common/autotest_common.sh@943 -- # kill 2077483 00:05:23.452 08:05:56 -- common/autotest_common.sh@948 -- # wait 2077483 00:05:23.452 spdk_app_start is called in Round 0. 00:05:23.452 Shutdown signal received, stop current app iteration 00:05:23.452 Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 reinitialization... 00:05:23.452 spdk_app_start is called in Round 1. 00:05:23.452 Shutdown signal received, stop current app iteration 00:05:23.452 Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 reinitialization... 00:05:23.452 spdk_app_start is called in Round 2. 00:05:23.452 Shutdown signal received, stop current app iteration 00:05:23.452 Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 reinitialization... 00:05:23.452 spdk_app_start is called in Round 3. 
00:05:23.452 Shutdown signal received, stop current app iteration 00:05:23.452 08:05:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:23.452 08:05:56 -- event/event.sh@42 -- # return 0 00:05:23.452 00:05:23.452 real 0m15.991s 00:05:23.452 user 0m34.406s 00:05:23.452 sys 0m2.295s 00:05:23.452 08:05:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.452 08:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 END TEST app_repeat 00:05:23.452 ************************************ 00:05:23.452 08:05:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:23.452 08:05:56 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:23.452 08:05:56 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:23.452 08:05:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:23.452 08:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 START TEST cpu_locks 00:05:23.452 ************************************ 00:05:23.452 08:05:56 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:23.452 * Looking for test storage... 
00:05:23.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:23.452 08:05:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:23.452 08:05:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:23.452 08:05:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:23.452 08:05:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:23.452 08:05:56 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:23.452 08:05:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:23.452 08:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 ************************************ 00:05:23.452 START TEST default_locks 00:05:23.452 ************************************ 00:05:23.452 08:05:56 -- common/autotest_common.sh@1102 -- # default_locks 00:05:23.452 08:05:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2080474 00:05:23.452 08:05:56 -- event/cpu_locks.sh@47 -- # waitforlisten 2080474 00:05:23.452 08:05:56 -- common/autotest_common.sh@817 -- # '[' -z 2080474 ']' 00:05:23.452 08:05:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.452 08:05:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.452 08:05:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.452 08:05:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.452 08:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:23.452 08:05:56 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.452 [2024-02-13 08:05:56.961254] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:23.452 [2024-02-13 08:05:56.961307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080474 ] 00:05:23.452 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.452 [2024-02-13 08:05:57.022096] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.452 [2024-02-13 08:05:57.097484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.452 [2024-02-13 08:05:57.097594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.390 08:05:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.391 08:05:57 -- common/autotest_common.sh@850 -- # return 0 00:05:24.391 08:05:57 -- event/cpu_locks.sh@49 -- # locks_exist 2080474 00:05:24.391 08:05:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.391 08:05:57 -- event/cpu_locks.sh@22 -- # lslocks -p 2080474 00:05:24.650 lslocks: write error 00:05:24.650 08:05:58 -- event/cpu_locks.sh@50 -- # killprocess 2080474 00:05:24.650 08:05:58 -- common/autotest_common.sh@924 -- # '[' -z 2080474 ']' 00:05:24.650 08:05:58 -- common/autotest_common.sh@928 -- # kill -0 2080474 00:05:24.650 08:05:58 -- common/autotest_common.sh@929 -- # uname 00:05:24.650 08:05:58 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:24.650 08:05:58 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2080474 00:05:24.650 08:05:58 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:24.650 08:05:58 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:24.650 08:05:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2080474' 00:05:24.650 killing process with pid 2080474 00:05:24.650 08:05:58 -- common/autotest_common.sh@943 -- # kill 2080474 00:05:24.650 08:05:58 -- common/autotest_common.sh@948 -- # 
wait 2080474 00:05:24.910 08:05:58 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2080474 00:05:24.910 08:05:58 -- common/autotest_common.sh@638 -- # local es=0 00:05:24.910 08:05:58 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2080474 00:05:24.910 08:05:58 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:24.910 08:05:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:24.910 08:05:58 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:24.910 08:05:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:24.910 08:05:58 -- common/autotest_common.sh@641 -- # waitforlisten 2080474 00:05:24.910 08:05:58 -- common/autotest_common.sh@817 -- # '[' -z 2080474 ']' 00:05:24.910 08:05:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.910 08:05:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.910 08:05:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:24.910 08:05:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.910 08:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2080474) - No such process 00:05:24.910 ERROR: process (pid: 2080474) is no longer running 00:05:24.910 08:05:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.910 08:05:58 -- common/autotest_common.sh@850 -- # return 1 00:05:24.910 08:05:58 -- common/autotest_common.sh@641 -- # es=1 00:05:24.910 08:05:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:24.910 08:05:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:24.910 08:05:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:24.910 08:05:58 -- event/cpu_locks.sh@54 -- # no_locks 00:05:24.910 08:05:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.910 08:05:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.910 08:05:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.910 00:05:24.910 real 0m1.573s 00:05:24.910 user 0m1.654s 00:05:24.910 sys 0m0.497s 00:05:24.910 08:05:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.910 08:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.910 ************************************ 00:05:24.910 END TEST default_locks 00:05:24.910 ************************************ 00:05:24.910 08:05:58 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:24.910 08:05:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:24.910 08:05:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:24.910 08:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.910 ************************************ 00:05:24.910 START TEST default_locks_via_rpc 00:05:24.910 ************************************ 00:05:24.910 08:05:58 -- common/autotest_common.sh@1102 -- # default_locks_via_rpc 00:05:24.910 08:05:58 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=2080739 00:05:24.910 08:05:58 -- event/cpu_locks.sh@63 -- # waitforlisten 2080739 00:05:24.910 08:05:58 -- common/autotest_common.sh@817 -- # '[' -z 2080739 ']' 00:05:24.910 08:05:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.910 08:05:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.910 08:05:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.910 08:05:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.910 08:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.910 08:05:58 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.910 [2024-02-13 08:05:58.564551] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:24.910 [2024-02-13 08:05:58.564600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080739 ] 00:05:24.910 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.170 [2024-02-13 08:05:58.623384] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.170 [2024-02-13 08:05:58.697539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.170 [2024-02-13 08:05:58.697658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.739 08:05:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.739 08:05:59 -- common/autotest_common.sh@850 -- # return 0 00:05:25.739 08:05:59 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:25.739 08:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:25.739 08:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:25.739 08:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:25.739 08:05:59 -- event/cpu_locks.sh@67 -- # no_locks 00:05:25.739 08:05:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.739 08:05:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.739 08:05:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.739 08:05:59 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.739 08:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:25.739 08:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:25.739 08:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:25.739 08:05:59 -- event/cpu_locks.sh@71 -- # locks_exist 2080739 00:05:25.739 08:05:59 -- event/cpu_locks.sh@22 -- # lslocks -p 2080739 00:05:25.739 08:05:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.999 08:05:59 -- event/cpu_locks.sh@73 -- # killprocess 2080739 
00:05:25.999 08:05:59 -- common/autotest_common.sh@924 -- # '[' -z 2080739 ']' 00:05:25.999 08:05:59 -- common/autotest_common.sh@928 -- # kill -0 2080739 00:05:25.999 08:05:59 -- common/autotest_common.sh@929 -- # uname 00:05:25.999 08:05:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:25.999 08:05:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2080739 00:05:25.999 08:05:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:25.999 08:05:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:25.999 08:05:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2080739' 00:05:25.999 killing process with pid 2080739 00:05:25.999 08:05:59 -- common/autotest_common.sh@943 -- # kill 2080739 00:05:25.999 08:05:59 -- common/autotest_common.sh@948 -- # wait 2080739 00:05:26.259 00:05:26.259 real 0m1.379s 00:05:26.259 user 0m1.435s 00:05:26.259 sys 0m0.418s 00:05:26.259 08:05:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.259 08:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:26.259 ************************************ 00:05:26.259 END TEST default_locks_via_rpc 00:05:26.259 ************************************ 00:05:26.259 08:05:59 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:26.259 08:05:59 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:26.259 08:05:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:26.259 08:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:26.259 ************************************ 00:05:26.259 START TEST non_locking_app_on_locked_coremask 00:05:26.259 ************************************ 00:05:26.259 08:05:59 -- common/autotest_common.sh@1102 -- # non_locking_app_on_locked_coremask 00:05:26.259 08:05:59 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2081000 00:05:26.259 08:05:59 -- event/cpu_locks.sh@81 -- # waitforlisten 2081000 
/var/tmp/spdk.sock 00:05:26.259 08:05:59 -- common/autotest_common.sh@817 -- # '[' -z 2081000 ']' 00:05:26.259 08:05:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.259 08:05:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.259 08:05:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.259 08:05:59 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.259 08:05:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.259 08:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:26.518 [2024-02-13 08:05:59.980064] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:26.518 [2024-02-13 08:05:59.980114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081000 ] 00:05:26.518 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.518 [2024-02-13 08:06:00.041203] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.518 [2024-02-13 08:06:00.119579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.518 [2024-02-13 08:06:00.119697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.457 08:06:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.457 08:06:00 -- common/autotest_common.sh@850 -- # return 0 00:05:27.457 08:06:00 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2081283 00:05:27.457 08:06:00 -- event/cpu_locks.sh@85 -- # waitforlisten 2081283 /var/tmp/spdk2.sock 00:05:27.457 08:06:00 -- common/autotest_common.sh@817 -- # '[' -z 2081283 ']' 
00:05:27.457 08:06:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.457 08:06:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.457 08:06:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.457 08:06:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.457 08:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:27.457 08:06:00 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:27.457 [2024-02-13 08:06:00.820672] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:27.457 [2024-02-13 08:06:00.820722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081283 ] 00:05:27.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.457 [2024-02-13 08:06:00.901739] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:27.457 [2024-02-13 08:06:00.901765] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.457 [2024-02-13 08:06:01.049285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.457 [2024-02-13 08:06:01.049421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.025 08:06:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.025 08:06:01 -- common/autotest_common.sh@850 -- # return 0 00:05:28.025 08:06:01 -- event/cpu_locks.sh@87 -- # locks_exist 2081000 00:05:28.025 08:06:01 -- event/cpu_locks.sh@22 -- # lslocks -p 2081000 00:05:28.025 08:06:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.623 lslocks: write error 00:05:28.623 08:06:02 -- event/cpu_locks.sh@89 -- # killprocess 2081000 00:05:28.623 08:06:02 -- common/autotest_common.sh@924 -- # '[' -z 2081000 ']' 00:05:28.623 08:06:02 -- common/autotest_common.sh@928 -- # kill -0 2081000 00:05:28.623 08:06:02 -- common/autotest_common.sh@929 -- # uname 00:05:28.623 08:06:02 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:28.623 08:06:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2081000 00:05:28.623 08:06:02 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:28.623 08:06:02 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:28.623 08:06:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2081000' 00:05:28.623 killing process with pid 2081000 00:05:28.623 08:06:02 -- common/autotest_common.sh@943 -- # kill 2081000 00:05:28.623 08:06:02 -- common/autotest_common.sh@948 -- # wait 2081000 00:05:29.206 08:06:02 -- event/cpu_locks.sh@90 -- # killprocess 2081283 00:05:29.206 08:06:02 -- common/autotest_common.sh@924 -- # '[' -z 2081283 ']' 00:05:29.206 08:06:02 -- common/autotest_common.sh@928 -- # kill -0 2081283 00:05:29.206 08:06:02 -- common/autotest_common.sh@929 -- # uname 00:05:29.206 08:06:02 -- 
common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:29.206 08:06:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2081283 00:05:29.206 08:06:02 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:29.206 08:06:02 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:29.206 08:06:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2081283' 00:05:29.206 killing process with pid 2081283 00:05:29.206 08:06:02 -- common/autotest_common.sh@943 -- # kill 2081283 00:05:29.206 08:06:02 -- common/autotest_common.sh@948 -- # wait 2081283 00:05:29.775 00:05:29.776 real 0m3.250s 00:05:29.776 user 0m3.472s 00:05:29.776 sys 0m0.894s 00:05:29.776 08:06:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.776 08:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:29.776 ************************************ 00:05:29.776 END TEST non_locking_app_on_locked_coremask 00:05:29.776 ************************************ 00:05:29.776 08:06:03 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:29.776 08:06:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:29.776 08:06:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:29.776 08:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:29.776 ************************************ 00:05:29.776 START TEST locking_app_on_unlocked_coremask 00:05:29.776 ************************************ 00:05:29.776 08:06:03 -- common/autotest_common.sh@1102 -- # locking_app_on_unlocked_coremask 00:05:29.776 08:06:03 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2081842 00:05:29.776 08:06:03 -- event/cpu_locks.sh@99 -- # waitforlisten 2081842 /var/tmp/spdk.sock 00:05:29.776 08:06:03 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:29.776 08:06:03 -- common/autotest_common.sh@817 -- # '[' -z 2081842 ']' 
00:05:29.776 08:06:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.776 08:06:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.776 08:06:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.776 08:06:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.776 08:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:29.776 [2024-02-13 08:06:03.273465] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:29.776 [2024-02-13 08:06:03.273513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081842 ] 00:05:29.776 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.776 [2024-02-13 08:06:03.332792] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:29.776 [2024-02-13 08:06:03.332821] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.776 [2024-02-13 08:06:03.396896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.776 [2024-02-13 08:06:03.397031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.713 08:06:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.713 08:06:04 -- common/autotest_common.sh@850 -- # return 0 00:05:30.713 08:06:04 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2081861 00:05:30.713 08:06:04 -- event/cpu_locks.sh@103 -- # waitforlisten 2081861 /var/tmp/spdk2.sock 00:05:30.713 08:06:04 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.713 08:06:04 -- common/autotest_common.sh@817 -- # '[' -z 2081861 ']' 00:05:30.713 08:06:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.713 08:06:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.713 08:06:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.713 08:06:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.713 08:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:30.713 [2024-02-13 08:06:04.102878] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:30.714 [2024-02-13 08:06:04.102928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081861 ] 00:05:30.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.714 [2024-02-13 08:06:04.185568] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.714 [2024-02-13 08:06:04.327135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.714 [2024-02-13 08:06:04.327278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.282 08:06:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.282 08:06:04 -- common/autotest_common.sh@850 -- # return 0 00:05:31.282 08:06:04 -- event/cpu_locks.sh@105 -- # locks_exist 2081861 00:05:31.282 08:06:04 -- event/cpu_locks.sh@22 -- # lslocks -p 2081861 00:05:31.282 08:06:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.851 lslocks: write error 00:05:31.851 08:06:05 -- event/cpu_locks.sh@107 -- # killprocess 2081842 00:05:31.851 08:06:05 -- common/autotest_common.sh@924 -- # '[' -z 2081842 ']' 00:05:31.851 08:06:05 -- common/autotest_common.sh@928 -- # kill -0 2081842 00:05:31.851 08:06:05 -- common/autotest_common.sh@929 -- # uname 00:05:31.851 08:06:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:31.851 08:06:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2081842 00:05:31.851 08:06:05 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:31.851 08:06:05 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:31.851 08:06:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2081842' 00:05:31.851 killing process with pid 2081842 00:05:31.851 08:06:05 -- common/autotest_common.sh@943 -- # kill 2081842 00:05:31.851 08:06:05 -- common/autotest_common.sh@948 -- # 
wait 2081842 00:05:32.420 08:06:06 -- event/cpu_locks.sh@108 -- # killprocess 2081861 00:05:32.420 08:06:06 -- common/autotest_common.sh@924 -- # '[' -z 2081861 ']' 00:05:32.420 08:06:06 -- common/autotest_common.sh@928 -- # kill -0 2081861 00:05:32.420 08:06:06 -- common/autotest_common.sh@929 -- # uname 00:05:32.420 08:06:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:32.420 08:06:06 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2081861 00:05:32.420 08:06:06 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:32.420 08:06:06 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:32.420 08:06:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2081861' 00:05:32.420 killing process with pid 2081861 00:05:32.420 08:06:06 -- common/autotest_common.sh@943 -- # kill 2081861 00:05:32.420 08:06:06 -- common/autotest_common.sh@948 -- # wait 2081861 00:05:32.989 00:05:32.989 real 0m3.211s 00:05:32.989 user 0m3.421s 00:05:32.990 sys 0m0.898s 00:05:32.990 08:06:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.990 08:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.990 ************************************ 00:05:32.990 END TEST locking_app_on_unlocked_coremask 00:05:32.990 ************************************ 00:05:32.990 08:06:06 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:32.990 08:06:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:32.990 08:06:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:32.990 08:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.990 ************************************ 00:05:32.990 START TEST locking_app_on_locked_coremask 00:05:32.990 ************************************ 00:05:32.990 08:06:06 -- common/autotest_common.sh@1102 -- # locking_app_on_locked_coremask 00:05:32.990 08:06:06 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2082355 
00:05:32.990 08:06:06 -- event/cpu_locks.sh@116 -- # waitforlisten 2082355 /var/tmp/spdk.sock 00:05:32.990 08:06:06 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.990 08:06:06 -- common/autotest_common.sh@817 -- # '[' -z 2082355 ']' 00:05:32.990 08:06:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.990 08:06:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.990 08:06:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.990 08:06:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.990 08:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.990 [2024-02-13 08:06:06.520888] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:32.990 [2024-02-13 08:06:06.520936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082355 ] 00:05:32.990 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.990 [2024-02-13 08:06:06.580288] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.990 [2024-02-13 08:06:06.655483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.990 [2024-02-13 08:06:06.655618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.929 08:06:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.929 08:06:07 -- common/autotest_common.sh@850 -- # return 0 00:05:33.929 08:06:07 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2082579 00:05:33.929 08:06:07 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2082579 /var/tmp/spdk2.sock 
00:05:33.929 08:06:07 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.929 08:06:07 -- common/autotest_common.sh@638 -- # local es=0 00:05:33.929 08:06:07 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2082579 /var/tmp/spdk2.sock 00:05:33.929 08:06:07 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:33.929 08:06:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:33.929 08:06:07 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:33.929 08:06:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:33.929 08:06:07 -- common/autotest_common.sh@641 -- # waitforlisten 2082579 /var/tmp/spdk2.sock 00:05:33.929 08:06:07 -- common/autotest_common.sh@817 -- # '[' -z 2082579 ']' 00:05:33.929 08:06:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.929 08:06:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:33.929 08:06:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.929 08:06:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:33.929 08:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:33.929 [2024-02-13 08:06:07.363478] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:33.929 [2024-02-13 08:06:07.363520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082579 ] 00:05:33.929 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.929 [2024-02-13 08:06:07.444436] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2082355 has claimed it. 00:05:33.929 [2024-02-13 08:06:07.444476] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2082579) - No such process 00:05:34.497 ERROR: process (pid: 2082579) is no longer running 00:05:34.497 08:06:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.497 08:06:07 -- common/autotest_common.sh@850 -- # return 1 00:05:34.497 08:06:07 -- common/autotest_common.sh@641 -- # es=1 00:05:34.497 08:06:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:34.497 08:06:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:34.497 08:06:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:34.497 08:06:07 -- event/cpu_locks.sh@122 -- # locks_exist 2082355 00:05:34.497 08:06:07 -- event/cpu_locks.sh@22 -- # lslocks -p 2082355 00:05:34.497 08:06:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.756 lslocks: write error 00:05:34.756 08:06:08 -- event/cpu_locks.sh@124 -- # killprocess 2082355 00:05:34.756 08:06:08 -- common/autotest_common.sh@924 -- # '[' -z 2082355 ']' 00:05:34.756 08:06:08 -- common/autotest_common.sh@928 -- # kill -0 2082355 00:05:34.756 08:06:08 -- common/autotest_common.sh@929 -- # uname 00:05:34.756 08:06:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:34.756 08:06:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2082355 00:05:34.756 08:06:08 -- 
common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:34.756 08:06:08 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:34.756 08:06:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2082355' 00:05:34.756 killing process with pid 2082355 00:05:34.756 08:06:08 -- common/autotest_common.sh@943 -- # kill 2082355 00:05:34.756 08:06:08 -- common/autotest_common.sh@948 -- # wait 2082355 00:05:35.325 00:05:35.325 real 0m2.287s 00:05:35.325 user 0m2.498s 00:05:35.325 sys 0m0.611s 00:05:35.325 08:06:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.325 08:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:35.325 ************************************ 00:05:35.325 END TEST locking_app_on_locked_coremask 00:05:35.325 ************************************ 00:05:35.325 08:06:08 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:35.325 08:06:08 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:35.325 08:06:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:35.325 08:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:35.325 ************************************ 00:05:35.325 START TEST locking_overlapped_coremask 00:05:35.325 ************************************ 00:05:35.325 08:06:08 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask 00:05:35.325 08:06:08 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2083013 00:05:35.325 08:06:08 -- event/cpu_locks.sh@133 -- # waitforlisten 2083013 /var/tmp/spdk.sock 00:05:35.325 08:06:08 -- common/autotest_common.sh@817 -- # '[' -z 2083013 ']' 00:05:35.325 08:06:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.325 08:06:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.325 08:06:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:35.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.325 08:06:08 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:35.325 08:06:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.325 08:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:35.325 [2024-02-13 08:06:08.842120] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:35.325 [2024-02-13 08:06:08.842169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083013 ] 00:05:35.325 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.325 [2024-02-13 08:06:08.900094] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.325 [2024-02-13 08:06:08.976749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.325 [2024-02-13 08:06:08.976893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.325 [2024-02-13 08:06:08.977008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.325 [2024-02-13 08:06:08.977010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.264 08:06:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.264 08:06:09 -- common/autotest_common.sh@850 -- # return 0 00:05:36.264 08:06:09 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2083254 00:05:36.264 08:06:09 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2083254 /var/tmp/spdk2.sock 00:05:36.264 08:06:09 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:36.264 08:06:09 -- common/autotest_common.sh@638 -- # local es=0 00:05:36.264 08:06:09 -- common/autotest_common.sh@640 -- # 
valid_exec_arg waitforlisten 2083254 /var/tmp/spdk2.sock 00:05:36.264 08:06:09 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:36.264 08:06:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.264 08:06:09 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:36.264 08:06:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.264 08:06:09 -- common/autotest_common.sh@641 -- # waitforlisten 2083254 /var/tmp/spdk2.sock 00:05:36.264 08:06:09 -- common/autotest_common.sh@817 -- # '[' -z 2083254 ']' 00:05:36.264 08:06:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.264 08:06:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.264 08:06:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.264 08:06:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.264 08:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.264 [2024-02-13 08:06:09.690155] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:36.264 [2024-02-13 08:06:09.690204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083254 ] 00:05:36.264 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.264 [2024-02-13 08:06:09.770965] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2083013 has claimed it. 00:05:36.264 [2024-02-13 08:06:09.771001] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:36.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2083254) - No such process 00:05:36.833 ERROR: process (pid: 2083254) is no longer running 00:05:36.833 08:06:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.833 08:06:10 -- common/autotest_common.sh@850 -- # return 1 00:05:36.833 08:06:10 -- common/autotest_common.sh@641 -- # es=1 00:05:36.833 08:06:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:36.833 08:06:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:36.833 08:06:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:36.833 08:06:10 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:36.833 08:06:10 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.833 08:06:10 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.833 08:06:10 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.833 08:06:10 -- event/cpu_locks.sh@141 -- # killprocess 2083013 00:05:36.833 08:06:10 -- common/autotest_common.sh@924 -- # '[' -z 2083013 ']' 00:05:36.833 08:06:10 -- common/autotest_common.sh@928 -- # kill -0 2083013 00:05:36.833 08:06:10 -- common/autotest_common.sh@929 -- # uname 00:05:36.833 08:06:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:36.833 08:06:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2083013 00:05:36.833 08:06:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:36.833 08:06:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:36.833 08:06:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2083013' 00:05:36.833 killing process with pid 2083013 00:05:36.833 
08:06:10 -- common/autotest_common.sh@943 -- # kill 2083013 00:05:36.833 08:06:10 -- common/autotest_common.sh@948 -- # wait 2083013 00:05:37.093 00:05:37.093 real 0m1.899s 00:05:37.093 user 0m5.328s 00:05:37.093 sys 0m0.394s 00:05:37.093 08:06:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.093 08:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.093 ************************************ 00:05:37.093 END TEST locking_overlapped_coremask 00:05:37.093 ************************************ 00:05:37.093 08:06:10 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:37.093 08:06:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:37.093 08:06:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:37.093 08:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.093 ************************************ 00:05:37.093 START TEST locking_overlapped_coremask_via_rpc 00:05:37.093 ************************************ 00:05:37.093 08:06:10 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask_via_rpc 00:05:37.093 08:06:10 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2083500 00:05:37.093 08:06:10 -- event/cpu_locks.sh@149 -- # waitforlisten 2083500 /var/tmp/spdk.sock 00:05:37.093 08:06:10 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:37.093 08:06:10 -- common/autotest_common.sh@817 -- # '[' -z 2083500 ']' 00:05:37.093 08:06:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.093 08:06:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.093 08:06:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.093 08:06:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.093 08:06:10 -- common/autotest_common.sh@10 -- # set +x 00:05:37.353 [2024-02-13 08:06:10.782156] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:37.353 [2024-02-13 08:06:10.782206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083500 ] 00:05:37.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.353 [2024-02-13 08:06:10.840533] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:37.353 [2024-02-13 08:06:10.840559] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.353 [2024-02-13 08:06:10.916885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.353 [2024-02-13 08:06:10.917026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.353 [2024-02-13 08:06:10.917144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.353 [2024-02-13 08:06:10.917145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.922 08:06:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.922 08:06:11 -- common/autotest_common.sh@850 -- # return 0 00:05:37.922 08:06:11 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2083735 00:05:37.922 08:06:11 -- event/cpu_locks.sh@153 -- # waitforlisten 2083735 /var/tmp/spdk2.sock 00:05:37.922 08:06:11 -- common/autotest_common.sh@817 -- # '[' -z 2083735 ']' 00:05:37.922 08:06:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.922 08:06:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.922 08:06:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:37.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.922 08:06:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.922 08:06:11 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:37.922 08:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.181 [2024-02-13 08:06:11.628194] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:38.181 [2024-02-13 08:06:11.628240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083735 ] 00:05:38.181 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.181 [2024-02-13 08:06:11.711952] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:38.181 [2024-02-13 08:06:11.711975] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.181 [2024-02-13 08:06:11.850482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.181 [2024-02-13 08:06:11.850628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.181 [2024-02-13 08:06:11.853695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.181 [2024-02-13 08:06:11.853696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:38.749 08:06:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.749 08:06:12 -- common/autotest_common.sh@850 -- # return 0 00:05:38.749 08:06:12 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.749 08:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:38.749 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:38.749 08:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:05:38.749 08:06:12 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.749 08:06:12 -- common/autotest_common.sh@638 -- # local es=0 00:05:38.749 08:06:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.749 08:06:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:38.749 08:06:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.749 08:06:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:38.749 08:06:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.749 08:06:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.749 08:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:38.749 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:38.749 [2024-02-13 08:06:12.436716] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2083500 has claimed it. 
00:05:39.009 request: 00:05:39.009 { 00:05:39.009 "method": "framework_enable_cpumask_locks", 00:05:39.009 "req_id": 1 00:05:39.009 } 00:05:39.009 Got JSON-RPC error response 00:05:39.009 response: 00:05:39.009 { 00:05:39.009 "code": -32603, 00:05:39.009 "message": "Failed to claim CPU core: 2" 00:05:39.009 } 00:05:39.009 08:06:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:39.009 08:06:12 -- common/autotest_common.sh@641 -- # es=1 00:05:39.009 08:06:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:39.009 08:06:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:39.009 08:06:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:39.009 08:06:12 -- event/cpu_locks.sh@158 -- # waitforlisten 2083500 /var/tmp/spdk.sock 00:05:39.009 08:06:12 -- common/autotest_common.sh@817 -- # '[' -z 2083500 ']' 00:05:39.009 08:06:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.009 08:06:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.009 08:06:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.009 08:06:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.009 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.009 08:06:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.009 08:06:12 -- common/autotest_common.sh@850 -- # return 0 00:05:39.009 08:06:12 -- event/cpu_locks.sh@159 -- # waitforlisten 2083735 /var/tmp/spdk2.sock 00:05:39.009 08:06:12 -- common/autotest_common.sh@817 -- # '[' -z 2083735 ']' 00:05:39.009 08:06:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.009 08:06:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.010 08:06:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.010 08:06:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.010 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.270 08:06:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.270 08:06:12 -- common/autotest_common.sh@850 -- # return 0 00:05:39.270 08:06:12 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:39.270 08:06:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:39.270 08:06:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:39.270 08:06:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:39.270 00:05:39.270 real 0m2.085s 00:05:39.270 user 0m0.836s 00:05:39.270 sys 0m0.178s 00:05:39.270 08:06:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.270 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:39.270 
************************************ 00:05:39.270 END TEST locking_overlapped_coremask_via_rpc 00:05:39.270 ************************************ 00:05:39.270 08:06:12 -- event/cpu_locks.sh@174 -- # cleanup 00:05:39.270 08:06:12 -- event/cpu_locks.sh@15 -- # [[ -z 2083500 ]] 00:05:39.270 08:06:12 -- event/cpu_locks.sh@15 -- # killprocess 2083500 00:05:39.270 08:06:12 -- common/autotest_common.sh@924 -- # '[' -z 2083500 ']' 00:05:39.270 08:06:12 -- common/autotest_common.sh@928 -- # kill -0 2083500 00:05:39.270 08:06:12 -- common/autotest_common.sh@929 -- # uname 00:05:39.270 08:06:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:39.270 08:06:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2083500 00:05:39.270 08:06:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:39.270 08:06:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:39.270 08:06:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2083500' 00:05:39.270 killing process with pid 2083500 00:05:39.270 08:06:12 -- common/autotest_common.sh@943 -- # kill 2083500 00:05:39.270 08:06:12 -- common/autotest_common.sh@948 -- # wait 2083500 00:05:39.839 08:06:13 -- event/cpu_locks.sh@16 -- # [[ -z 2083735 ]] 00:05:39.839 08:06:13 -- event/cpu_locks.sh@16 -- # killprocess 2083735 00:05:39.839 08:06:13 -- common/autotest_common.sh@924 -- # '[' -z 2083735 ']' 00:05:39.839 08:06:13 -- common/autotest_common.sh@928 -- # kill -0 2083735 00:05:39.839 08:06:13 -- common/autotest_common.sh@929 -- # uname 00:05:39.839 08:06:13 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:39.839 08:06:13 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2083735 00:05:39.839 08:06:13 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:05:39.839 08:06:13 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:05:39.839 08:06:13 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 
2083735' 00:05:39.839 killing process with pid 2083735 00:05:39.839 08:06:13 -- common/autotest_common.sh@943 -- # kill 2083735 00:05:39.839 08:06:13 -- common/autotest_common.sh@948 -- # wait 2083735 00:05:40.099 08:06:13 -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.099 08:06:13 -- event/cpu_locks.sh@1 -- # cleanup 00:05:40.099 08:06:13 -- event/cpu_locks.sh@15 -- # [[ -z 2083500 ]] 00:05:40.099 08:06:13 -- event/cpu_locks.sh@15 -- # killprocess 2083500 00:05:40.099 08:06:13 -- common/autotest_common.sh@924 -- # '[' -z 2083500 ']' 00:05:40.099 08:06:13 -- common/autotest_common.sh@928 -- # kill -0 2083500 00:05:40.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (2083500) - No such process 00:05:40.099 08:06:13 -- common/autotest_common.sh@951 -- # echo 'Process with pid 2083500 is not found' 00:05:40.099 Process with pid 2083500 is not found 00:05:40.099 08:06:13 -- event/cpu_locks.sh@16 -- # [[ -z 2083735 ]] 00:05:40.099 08:06:13 -- event/cpu_locks.sh@16 -- # killprocess 2083735 00:05:40.099 08:06:13 -- common/autotest_common.sh@924 -- # '[' -z 2083735 ']' 00:05:40.099 08:06:13 -- common/autotest_common.sh@928 -- # kill -0 2083735 00:05:40.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (2083735) - No such process 00:05:40.099 08:06:13 -- common/autotest_common.sh@951 -- # echo 'Process with pid 2083735 is not found' 00:05:40.099 Process with pid 2083735 is not found 00:05:40.099 08:06:13 -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.099 00:05:40.099 real 0m16.800s 00:05:40.099 user 0m29.142s 00:05:40.099 sys 0m4.666s 00:05:40.099 08:06:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.099 08:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.099 ************************************ 00:05:40.099 END TEST cpu_locks 00:05:40.099 ************************************ 00:05:40.099 00:05:40.099 real 0m41.567s 00:05:40.099 user 1m19.464s 
00:05:40.099 sys 0m7.750s 00:05:40.099 08:06:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.099 08:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.099 ************************************ 00:05:40.099 END TEST event 00:05:40.099 ************************************ 00:05:40.099 08:06:13 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.099 08:06:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:40.099 08:06:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:40.099 08:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.099 ************************************ 00:05:40.099 START TEST thread 00:05:40.099 ************************************ 00:05:40.099 08:06:13 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.099 * Looking for test storage... 00:05:40.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:40.099 08:06:13 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.099 08:06:13 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:40.099 08:06:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:40.359 08:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:40.359 ************************************ 00:05:40.359 START TEST thread_poller_perf 00:05:40.359 ************************************ 00:05:40.359 08:06:13 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.359 [2024-02-13 08:06:13.813049] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:40.359 [2024-02-13 08:06:13.813129] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084168 ] 00:05:40.359 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.359 [2024-02-13 08:06:13.877123] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.359 [2024-02-13 08:06:13.946338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.359 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:41.736 ====================================== 00:05:41.736 busy:2105272658 (cyc) 00:05:41.736 total_run_count: 400000 00:05:41.736 tsc_hz: 2100000000 (cyc) 00:05:41.736 ====================================== 00:05:41.736 poller_cost: 5263 (cyc), 2506 (nsec) 00:05:41.736 00:05:41.736 real 0m1.243s 00:05:41.736 user 0m1.166s 00:05:41.736 sys 0m0.072s 00:05:41.736 08:06:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.736 08:06:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.736 ************************************ 00:05:41.736 END TEST thread_poller_perf 00:05:41.736 ************************************ 00:05:41.736 08:06:15 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.736 08:06:15 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:41.736 08:06:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:41.736 08:06:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.736 ************************************ 00:05:41.736 START TEST thread_poller_perf 00:05:41.736 ************************************ 00:05:41.736 08:06:15 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.736 
[2024-02-13 08:06:15.096126] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:41.736 [2024-02-13 08:06:15.096205] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084358 ] 00:05:41.736 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.736 [2024-02-13 08:06:15.158688] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.736 [2024-02-13 08:06:15.228615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.736 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:42.669 ====================================== 00:05:42.669 busy:2101984962 (cyc) 00:05:42.669 total_run_count: 5627000 00:05:42.669 tsc_hz: 2100000000 (cyc) 00:05:42.669 ====================================== 00:05:42.669 poller_cost: 373 (cyc), 177 (nsec) 00:05:42.669 00:05:42.669 real 0m1.236s 00:05:42.669 user 0m1.157s 00:05:42.669 sys 0m0.075s 00:05:42.669 08:06:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.669 08:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:42.669 ************************************ 00:05:42.669 END TEST thread_poller_perf 00:05:42.669 ************************************ 00:05:42.669 08:06:16 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:42.669 00:05:42.669 real 0m2.642s 00:05:42.669 user 0m2.386s 00:05:42.669 sys 0m0.268s 00:05:42.669 08:06:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.669 08:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:42.669 ************************************ 00:05:42.669 END TEST thread 00:05:42.669 ************************************ 00:05:42.928 08:06:16 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:42.928 08:06:16 -- common/autotest_common.sh@1075 -- # 
'[' 2 -le 1 ']' 00:05:42.928 08:06:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:42.928 08:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:42.928 ************************************ 00:05:42.928 START TEST accel 00:05:42.928 ************************************ 00:05:42.928 08:06:16 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:42.928 * Looking for test storage... 00:05:42.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:42.928 08:06:16 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:42.928 08:06:16 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:42.928 08:06:16 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:42.928 08:06:16 -- accel/accel.sh@59 -- # spdk_tgt_pid=2084640 00:05:42.928 08:06:16 -- accel/accel.sh@60 -- # waitforlisten 2084640 00:05:42.928 08:06:16 -- common/autotest_common.sh@817 -- # '[' -z 2084640 ']' 00:05:42.928 08:06:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.928 08:06:16 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:42.928 08:06:16 -- accel/accel.sh@58 -- # build_accel_config 00:05:42.928 08:06:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.928 08:06:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.928 08:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.928 08:06:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.928 08:06:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.928 08:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:42.928 08:06:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.928 08:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.928 08:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.928 08:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.928 08:06:16 -- accel/accel.sh@42 -- # jq -r . 00:05:42.928 [2024-02-13 08:06:16.500884] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:42.928 [2024-02-13 08:06:16.500949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084640 ] 00:05:42.928 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.928 [2024-02-13 08:06:16.557865] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.186 [2024-02-13 08:06:16.631253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.186 [2024-02-13 08:06:16.631360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.186 [2024-02-13 08:06:16.631382] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:43.752 08:06:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.752 08:06:17 -- common/autotest_common.sh@850 -- # return 0 00:05:43.752 08:06:17 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:43.752 08:06:17 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:43.752 08:06:17 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:43.752 08:06:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:43.752 08:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:43.752 08:06:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 
08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read 
-r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # IFS== 00:05:43.752 08:06:17 -- accel/accel.sh@64 -- # read -r opc module 00:05:43.752 08:06:17 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:43.752 08:06:17 -- accel/accel.sh@67 -- # killprocess 2084640 00:05:43.752 08:06:17 -- common/autotest_common.sh@924 -- # '[' -z 2084640 ']' 00:05:43.752 08:06:17 -- common/autotest_common.sh@928 -- # kill -0 2084640 00:05:43.752 08:06:17 -- common/autotest_common.sh@929 -- # uname 00:05:43.752 08:06:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:05:43.752 08:06:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2084640 00:05:43.752 08:06:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:05:43.752 08:06:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:05:43.752 08:06:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2084640' 00:05:43.752 killing process with pid 2084640 00:05:43.752 08:06:17 -- common/autotest_common.sh@943 -- # kill 2084640 00:05:43.752 [2024-02-13 08:06:17.393335] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:43.752 08:06:17 -- common/autotest_common.sh@948 -- # wait 2084640 00:05:44.048 08:06:17 -- accel/accel.sh@68 -- # trap - ERR 00:05:44.048 08:06:17 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:44.048 08:06:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:05:44.048 08:06:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:44.048 08:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.048 08:06:17 -- common/autotest_common.sh@1102 -- # accel_perf -h 00:05:44.048 08:06:17 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:44.048 08:06:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.048 08:06:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.048 08:06:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.308 08:06:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.308 08:06:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.308 08:06:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.308 08:06:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.308 08:06:17 -- accel/accel.sh@42 -- # jq -r . 00:05:44.308 08:06:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.308 08:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.308 08:06:17 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:44.308 08:06:17 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:44.308 08:06:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:44.308 08:06:17 -- common/autotest_common.sh@10 -- # set +x 00:05:44.308 ************************************ 00:05:44.308 START TEST accel_missing_filename 00:05:44.308 ************************************ 00:05:44.308 08:06:17 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress 00:05:44.308 08:06:17 -- common/autotest_common.sh@638 -- # local es=0 00:05:44.308 08:06:17 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:44.308 08:06:17 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:44.308 08:06:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.308 08:06:17 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:44.308 08:06:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.308 08:06:17 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:44.308 08:06:17 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:44.308 08:06:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.308 08:06:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.308 08:06:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.308 08:06:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.308 08:06:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.308 08:06:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.308 08:06:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.308 08:06:17 -- accel/accel.sh@42 -- # jq -r . 00:05:44.308 [2024-02-13 08:06:17.825182] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:44.308 [2024-02-13 08:06:17.825259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084912 ] 00:05:44.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.308 [2024-02-13 08:06:17.888424] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.308 [2024-02-13 08:06:17.958232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.308 [2024-02-13 08:06:17.958288] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:44.568 [2024-02-13 08:06:17.998372] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.568 [2024-02-13 08:06:17.998414] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:44.568 [2024-02-13 08:06:18.058724] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:44.568 A filename is required. 
00:05:44.568 08:06:18 -- common/autotest_common.sh@641 -- # es=234 00:05:44.568 08:06:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:44.568 08:06:18 -- common/autotest_common.sh@650 -- # es=106 00:05:44.568 08:06:18 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:44.568 08:06:18 -- common/autotest_common.sh@658 -- # es=1 00:05:44.568 08:06:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:44.568 00:05:44.568 real 0m0.354s 00:05:44.568 user 0m0.271s 00:05:44.568 sys 0m0.121s 00:05:44.568 08:06:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.568 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.568 ************************************ 00:05:44.568 END TEST accel_missing_filename 00:05:44.568 ************************************ 00:05:44.568 08:06:18 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.568 08:06:18 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:05:44.568 08:06:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:44.568 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.568 ************************************ 00:05:44.568 START TEST accel_compress_verify 00:05:44.568 ************************************ 00:05:44.568 08:06:18 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.568 08:06:18 -- common/autotest_common.sh@638 -- # local es=0 00:05:44.568 08:06:18 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.568 08:06:18 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:44.568 08:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.568 08:06:18 -- common/autotest_common.sh@630 -- # type -t 
accel_perf 00:05:44.568 08:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:44.568 08:06:18 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.568 08:06:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:44.568 08:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.568 08:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.568 08:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.568 08:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.568 08:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.568 08:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.568 08:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.568 08:06:18 -- accel/accel.sh@42 -- # jq -r . 00:05:44.568 [2024-02-13 08:06:18.216580] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:44.568 [2024-02-13 08:06:18.216663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085109 ] 00:05:44.568 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.827 [2024-02-13 08:06:18.281036] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.827 [2024-02-13 08:06:18.353246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.827 [2024-02-13 08:06:18.353298] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:44.827 [2024-02-13 08:06:18.394287] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.827 [2024-02-13 08:06:18.394326] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:44.827 [2024-02-13 08:06:18.454013] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:45.087 00:05:45.087 Compression does not support the verify option, aborting. 
00:05:45.087 08:06:18 -- common/autotest_common.sh@641 -- # es=161 00:05:45.087 08:06:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.087 08:06:18 -- common/autotest_common.sh@650 -- # es=33 00:05:45.087 08:06:18 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:45.087 08:06:18 -- common/autotest_common.sh@658 -- # es=1 00:05:45.087 08:06:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.087 00:05:45.087 real 0m0.360s 00:05:45.087 user 0m0.278s 00:05:45.087 sys 0m0.119s 00:05:45.087 08:06:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.087 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.087 ************************************ 00:05:45.087 END TEST accel_compress_verify 00:05:45.087 ************************************ 00:05:45.087 08:06:18 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:45.087 08:06:18 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:45.087 08:06:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.087 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.087 ************************************ 00:05:45.087 START TEST accel_wrong_workload 00:05:45.087 ************************************ 00:05:45.087 08:06:18 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w foobar 00:05:45.087 08:06:18 -- common/autotest_common.sh@638 -- # local es=0 00:05:45.087 08:06:18 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:45.087 08:06:18 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:45.087 08:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.087 08:06:18 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:45.087 08:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.087 08:06:18 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:45.087 08:06:18 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:45.087 08:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.087 08:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.087 08:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.087 08:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.087 08:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.087 08:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.087 08:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.087 08:06:18 -- accel/accel.sh@42 -- # jq -r . 00:05:45.087 Unsupported workload type: foobar 00:05:45.087 [2024-02-13 08:06:18.613151] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:45.087 accel_perf options: 00:05:45.087 [-h help message] 00:05:45.087 [-q queue depth per core] 00:05:45.087 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:45.087 [-T number of threads per core 00:05:45.087 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:45.087 [-t time in seconds] 00:05:45.087 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:45.087 [ dif_verify, , dif_generate, dif_generate_copy 00:05:45.087 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:45.087 [-l for compress/decompress workloads, name of uncompressed input file 00:05:45.087 [-S for crc32c workload, use this seed value (default 0) 00:05:45.087 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:45.087 [-f for fill workload, use this BYTE value (default 255) 00:05:45.087 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:45.087 [-y verify result if this switch is on] 00:05:45.087 [-a tasks to allocate per core (default: same value as -q)] 00:05:45.087 Can be used to spread operations across a wider range of memory. 00:05:45.087 08:06:18 -- common/autotest_common.sh@641 -- # es=1 00:05:45.087 08:06:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.087 08:06:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:45.087 08:06:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.087 00:05:45.087 real 0m0.034s 00:05:45.087 user 0m0.020s 00:05:45.087 sys 0m0.014s 00:05:45.087 08:06:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.088 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.088 ************************************ 00:05:45.088 END TEST accel_wrong_workload 00:05:45.088 ************************************ 00:05:45.088 Error: writing output failed: Broken pipe 00:05:45.088 08:06:18 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:45.088 08:06:18 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:05:45.088 08:06:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 
00:05:45.088 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.088 ************************************ 00:05:45.088 START TEST accel_negative_buffers 00:05:45.088 ************************************ 00:05:45.088 08:06:18 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:45.088 08:06:18 -- common/autotest_common.sh@638 -- # local es=0 00:05:45.088 08:06:18 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:45.088 08:06:18 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:45.088 08:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.088 08:06:18 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:45.088 08:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.088 08:06:18 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:45.088 08:06:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:45.088 08:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.088 08:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.088 08:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.088 08:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.088 08:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.088 08:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.088 08:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.088 08:06:18 -- accel/accel.sh@42 -- # jq -r . 00:05:45.088 -x option must be non-negative. 
00:05:45.088 [2024-02-13 08:06:18.684740] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:45.088 accel_perf options: 00:05:45.088 [-h help message] 00:05:45.088 [-q queue depth per core] 00:05:45.088 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:45.088 [-T number of threads per core 00:05:45.088 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:45.088 [-t time in seconds] 00:05:45.088 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:45.088 [ dif_verify, , dif_generate, dif_generate_copy 00:05:45.088 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:45.088 [-l for compress/decompress workloads, name of uncompressed input file 00:05:45.088 [-S for crc32c workload, use this seed value (default 0) 00:05:45.088 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:45.088 [-f for fill workload, use this BYTE value (default 255) 00:05:45.088 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:45.088 [-y verify result if this switch is on] 00:05:45.088 [-a tasks to allocate per core (default: same value as -q)] 00:05:45.088 Can be used to spread operations across a wider range of memory. 
00:05:45.088 08:06:18 -- common/autotest_common.sh@641 -- # es=1 00:05:45.088 08:06:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.088 08:06:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:45.088 08:06:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.088 00:05:45.088 real 0m0.034s 00:05:45.088 user 0m0.018s 00:05:45.088 sys 0m0.015s 00:05:45.088 08:06:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.088 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.088 ************************************ 00:05:45.088 END TEST accel_negative_buffers 00:05:45.088 ************************************ 00:05:45.088 Error: writing output failed: Broken pipe 00:05:45.088 08:06:18 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:45.088 08:06:18 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:45.088 08:06:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:45.088 08:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.088 ************************************ 00:05:45.088 START TEST accel_crc32c 00:05:45.088 ************************************ 00:05:45.088 08:06:18 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:45.088 08:06:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.088 08:06:18 -- accel/accel.sh@17 -- # local accel_module 00:05:45.088 08:06:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:45.088 08:06:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:45.088 08:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.088 08:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.088 08:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.088 08:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.088 08:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.088 08:06:18 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.088 08:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.088 08:06:18 -- accel/accel.sh@42 -- # jq -r . 00:05:45.088 [2024-02-13 08:06:18.755007] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:45.088 [2024-02-13 08:06:18.755072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085170 ] 00:05:45.347 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.347 [2024-02-13 08:06:18.819942] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.347 [2024-02-13 08:06:18.898608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.347 [2024-02-13 08:06:18.898668] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:46.282 [2024-02-13 08:06:19.944134] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:46.540 08:06:20 -- accel/accel.sh@18 -- # out=' 00:05:46.540 SPDK Configuration: 00:05:46.540 Core mask: 0x1 00:05:46.540 00:05:46.540 Accel Perf Configuration: 00:05:46.540 Workload Type: crc32c 00:05:46.540 CRC-32C seed: 32 00:05:46.540 Transfer size: 4096 bytes 00:05:46.540 Vector count 1 00:05:46.540 Module: software 00:05:46.540 Queue depth: 32 00:05:46.540 Allocate depth: 32 00:05:46.540 # threads/core: 1 00:05:46.540 Run time: 1 seconds 00:05:46.540 Verify: Yes 00:05:46.540 00:05:46.540 Running for 1 seconds... 
00:05:46.540 00:05:46.540 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.540 ------------------------------------------------------------------------------------ 00:05:46.540 0,0 574880/s 2245 MiB/s 0 0 00:05:46.540 ==================================================================================== 00:05:46.540 Total 574880/s 2245 MiB/s 0 0' 00:05:46.540 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.540 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.540 08:06:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:46.541 08:06:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:46.541 08:06:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.541 08:06:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.541 08:06:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.541 08:06:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.541 08:06:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.541 08:06:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.541 08:06:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.541 08:06:20 -- accel/accel.sh@42 -- # jq -r . 00:05:46.541 [2024-02-13 08:06:20.124230] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:46.541 [2024-02-13 08:06:20.124313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085401 ] 00:05:46.541 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.541 [2024-02-13 08:06:20.183756] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.798 [2024-02-13 08:06:20.251960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.799 [2024-02-13 08:06:20.252007] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val= 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val= 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val=0x1 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val= 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val= 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- 
accel/accel.sh@21 -- # val=crc32c 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val=32 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val= 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val=software 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val=32 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val=32 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val=1 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # 
read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val=Yes 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val= 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:46.799 08:06:20 -- accel/accel.sh@21 -- # val= 00:05:46.799 08:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:46.799 08:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:47.732 [2024-02-13 08:06:21.296602] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:47.990 08:06:21 -- accel/accel.sh@21 -- # val= 00:05:47.990 08:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:47.990 08:06:21 -- accel/accel.sh@21 -- # val= 00:05:47.990 08:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:47.990 08:06:21 -- accel/accel.sh@21 -- # val= 00:05:47.990 08:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:47.990 08:06:21 -- accel/accel.sh@21 -- # val= 00:05:47.990 08:06:21 -- accel/accel.sh@22 -- 
# case "$var" in 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:47.990 08:06:21 -- accel/accel.sh@21 -- # val= 00:05:47.990 08:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:47.990 08:06:21 -- accel/accel.sh@21 -- # val= 00:05:47.990 08:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # IFS=: 00:05:47.990 08:06:21 -- accel/accel.sh@20 -- # read -r var val 00:05:47.990 08:06:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:47.990 08:06:21 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:47.990 08:06:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.990 00:05:47.990 real 0m2.723s 00:05:47.990 user 0m2.496s 00:05:47.990 sys 0m0.234s 00:05:47.990 08:06:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.990 08:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:47.990 ************************************ 00:05:47.990 END TEST accel_crc32c 00:05:47.990 ************************************ 00:05:47.990 08:06:21 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:47.990 08:06:21 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:47.990 08:06:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:47.990 08:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:47.990 ************************************ 00:05:47.990 START TEST accel_crc32c_C2 00:05:47.990 ************************************ 00:05:47.990 08:06:21 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:47.990 08:06:21 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.990 08:06:21 -- accel/accel.sh@17 -- # local accel_module 00:05:47.990 08:06:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:47.990 08:06:21 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:47.990 08:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.990 08:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.990 08:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.990 08:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.990 08:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.990 08:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.990 08:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.990 08:06:21 -- accel/accel.sh@42 -- # jq -r . 00:05:47.990 [2024-02-13 08:06:21.518237] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:47.990 [2024-02-13 08:06:21.518316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085648 ] 00:05:47.990 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.990 [2024-02-13 08:06:21.578429] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.990 [2024-02-13 08:06:21.646530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.990 [2024-02-13 08:06:21.646586] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:49.364 [2024-02-13 08:06:22.691447] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:49.364 08:06:22 -- accel/accel.sh@18 -- # out=' 00:05:49.364 SPDK Configuration: 00:05:49.364 Core mask: 0x1 00:05:49.364 00:05:49.364 Accel Perf Configuration: 00:05:49.364 Workload Type: crc32c 00:05:49.364 CRC-32C 
seed: 0 00:05:49.364 Transfer size: 4096 bytes 00:05:49.364 Vector count 2 00:05:49.364 Module: software 00:05:49.364 Queue depth: 32 00:05:49.364 Allocate depth: 32 00:05:49.364 # threads/core: 1 00:05:49.364 Run time: 1 seconds 00:05:49.364 Verify: Yes 00:05:49.364 00:05:49.364 Running for 1 seconds... 00:05:49.364 00:05:49.364 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:49.364 ------------------------------------------------------------------------------------ 00:05:49.364 0,0 455424/s 3558 MiB/s 0 0 00:05:49.364 ==================================================================================== 00:05:49.364 Total 455424/s 1779 MiB/s 0 0' 00:05:49.364 08:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:49.364 08:06:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:49.364 08:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.364 08:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.364 08:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.364 08:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.364 08:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.364 08:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.364 08:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.364 08:06:22 -- accel/accel.sh@42 -- # jq -r . 00:05:49.364 [2024-02-13 08:06:22.866016] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:49.364 [2024-02-13 08:06:22.866077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085883 ] 00:05:49.364 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.364 [2024-02-13 08:06:22.924199] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.364 [2024-02-13 08:06:22.990283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.364 [2024-02-13 08:06:22.990336] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val= 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val= 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val=0x1 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val= 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val= 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- 
accel/accel.sh@21 -- # val=crc32c 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val=0 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val= 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.364 08:06:23 -- accel/accel.sh@21 -- # val=software 00:05:49.364 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.364 08:06:23 -- accel/accel.sh@23 -- # accel_module=software 00:05:49.364 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.365 08:06:23 -- accel/accel.sh@21 -- # val=32 00:05:49.365 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.365 08:06:23 -- accel/accel.sh@21 -- # val=32 00:05:49.365 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.365 08:06:23 -- accel/accel.sh@21 -- # val=1 00:05:49.365 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # 
read -r var val 00:05:49.365 08:06:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:49.365 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.365 08:06:23 -- accel/accel.sh@21 -- # val=Yes 00:05:49.365 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.365 08:06:23 -- accel/accel.sh@21 -- # val= 00:05:49.365 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:49.365 08:06:23 -- accel/accel.sh@21 -- # val= 00:05:49.365 08:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:49.365 08:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:50.739 [2024-02-13 08:06:24.035061] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:50.739 08:06:24 -- accel/accel.sh@21 -- # val= 00:05:50.739 08:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:50.739 08:06:24 -- accel/accel.sh@21 -- # val= 00:05:50.739 08:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:50.739 08:06:24 -- accel/accel.sh@21 -- # val= 00:05:50.739 08:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:50.739 08:06:24 -- accel/accel.sh@21 -- # val= 00:05:50.739 08:06:24 -- accel/accel.sh@22 -- 
# case "$var" in 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:50.739 08:06:24 -- accel/accel.sh@21 -- # val= 00:05:50.739 08:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:50.739 08:06:24 -- accel/accel.sh@21 -- # val= 00:05:50.739 08:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:50.739 08:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:50.739 08:06:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:50.739 08:06:24 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:50.739 08:06:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.739 00:05:50.739 real 0m2.700s 00:05:50.739 user 0m2.478s 00:05:50.739 sys 0m0.232s 00:05:50.739 08:06:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.739 08:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.739 ************************************ 00:05:50.739 END TEST accel_crc32c_C2 00:05:50.739 ************************************ 00:05:50.739 08:06:24 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:50.739 08:06:24 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:50.739 08:06:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:50.739 08:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.739 ************************************ 00:05:50.739 START TEST accel_copy 00:05:50.739 ************************************ 00:05:50.739 08:06:24 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy -y 00:05:50.739 08:06:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.739 08:06:24 -- accel/accel.sh@17 -- # local accel_module 00:05:50.739 08:06:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:50.740 08:06:24 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:50.740 08:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.740 08:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.740 08:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.740 08:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.740 08:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.740 08:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.740 08:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.740 08:06:24 -- accel/accel.sh@42 -- # jq -r . 00:05:50.740 [2024-02-13 08:06:24.254515] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:50.740 [2024-02-13 08:06:24.254583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086129 ] 00:05:50.740 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.740 [2024-02-13 08:06:24.317050] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.740 [2024-02-13 08:06:24.384931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.740 [2024-02-13 08:06:24.384985] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:52.115 [2024-02-13 08:06:25.429966] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:52.115 08:06:25 -- accel/accel.sh@18 -- # out=' 00:05:52.115 SPDK Configuration: 00:05:52.115 Core mask: 0x1 00:05:52.115 00:05:52.115 Accel Perf Configuration: 00:05:52.115 Workload Type: copy 00:05:52.115 Transfer size: 4096 bytes 00:05:52.115 
Vector count 1 00:05:52.115 Module: software 00:05:52.115 Queue depth: 32 00:05:52.115 Allocate depth: 32 00:05:52.115 # threads/core: 1 00:05:52.115 Run time: 1 seconds 00:05:52.115 Verify: Yes 00:05:52.115 00:05:52.115 Running for 1 seconds... 00:05:52.115 00:05:52.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:52.115 ------------------------------------------------------------------------------------ 00:05:52.115 0,0 431552/s 1685 MiB/s 0 0 00:05:52.115 ==================================================================================== 00:05:52.115 Total 431552/s 1685 MiB/s 0 0' 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.115 08:06:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:52.115 08:06:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:52.115 08:06:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.115 08:06:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.115 08:06:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.115 08:06:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.115 08:06:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.115 08:06:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.115 08:06:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.115 08:06:25 -- accel/accel.sh@42 -- # jq -r . 00:05:52.115 [2024-02-13 08:06:25.605440] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:52.115 [2024-02-13 08:06:25.605501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086361 ] 00:05:52.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.115 [2024-02-13 08:06:25.663556] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.115 [2024-02-13 08:06:25.730167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.115 [2024-02-13 08:06:25.730222] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:52.115 08:06:25 -- accel/accel.sh@21 -- # val= 00:05:52.115 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.115 08:06:25 -- accel/accel.sh@21 -- # val= 00:05:52.115 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.115 08:06:25 -- accel/accel.sh@21 -- # val=0x1 00:05:52.115 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.115 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.115 08:06:25 -- accel/accel.sh@21 -- # val= 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val= 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- 
accel/accel.sh@21 -- # val=copy 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val= 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val=software 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@23 -- # accel_module=software 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val=32 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val=32 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val=1 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 
-- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val=Yes 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val= 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:52.116 08:06:25 -- accel/accel.sh@21 -- # val= 00:05:52.116 08:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:52.116 08:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:53.492 [2024-02-13 08:06:26.774049] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:53.492 08:06:26 -- accel/accel.sh@21 -- # val= 00:05:53.492 08:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:53.492 08:06:26 -- accel/accel.sh@21 -- # val= 00:05:53.492 08:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:53.492 08:06:26 -- accel/accel.sh@21 -- # val= 00:05:53.492 08:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:53.492 08:06:26 -- accel/accel.sh@21 -- # val= 00:05:53.492 08:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:53.492 08:06:26 -- accel/accel.sh@21 -- # val= 00:05:53.492 08:06:26 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:53.492 08:06:26 -- accel/accel.sh@21 -- # val= 00:05:53.492 08:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:53.492 08:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:53.492 08:06:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:53.492 08:06:26 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:53.492 08:06:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.492 00:05:53.492 real 0m2.701s 00:05:53.492 user 0m2.481s 00:05:53.492 sys 0m0.228s 00:05:53.492 08:06:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.492 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:53.492 ************************************ 00:05:53.492 END TEST accel_copy 00:05:53.492 ************************************ 00:05:53.492 08:06:26 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.492 08:06:26 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:05:53.492 08:06:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:53.492 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:53.492 ************************************ 00:05:53.492 START TEST accel_fill 00:05:53.492 ************************************ 00:05:53.492 08:06:26 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.492 08:06:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.492 08:06:26 -- accel/accel.sh@17 -- # local accel_module 00:05:53.492 08:06:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.492 08:06:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:53.492 08:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.492 
08:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.492 08:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.492 08:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.492 08:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.492 08:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.492 08:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.492 08:06:26 -- accel/accel.sh@42 -- # jq -r . 00:05:53.492 [2024-02-13 08:06:26.993447] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:53.492 [2024-02-13 08:06:26.993512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086617 ] 00:05:53.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.492 [2024-02-13 08:06:27.055491] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.492 [2024-02-13 08:06:27.123097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.492 [2024-02-13 08:06:27.123149] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:54.868 [2024-02-13 08:06:28.167557] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:54.868 08:06:28 -- accel/accel.sh@18 -- # out=' 00:05:54.868 SPDK Configuration: 00:05:54.868 Core mask: 0x1 00:05:54.868 00:05:54.868 Accel Perf Configuration: 00:05:54.868 Workload Type: fill 00:05:54.868 Fill pattern: 0x80 00:05:54.868 Transfer size: 4096 bytes 00:05:54.868 Vector count 1 00:05:54.868 Module: software 00:05:54.868 Queue depth: 64 00:05:54.868 Allocate depth: 64 00:05:54.868 # threads/core: 1 00:05:54.868 Run 
time: 1 seconds 00:05:54.868 Verify: Yes 00:05:54.868 00:05:54.868 Running for 1 seconds... 00:05:54.868 00:05:54.868 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.868 ------------------------------------------------------------------------------------ 00:05:54.868 0,0 667264/s 2606 MiB/s 0 0 00:05:54.868 ==================================================================================== 00:05:54.868 Total 667264/s 2606 MiB/s 0 0' 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.868 08:06:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:54.868 08:06:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.868 08:06:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.868 08:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.868 08:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.868 08:06:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.868 08:06:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.868 08:06:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.868 08:06:28 -- accel/accel.sh@42 -- # jq -r . 00:05:54.868 [2024-02-13 08:06:28.343013] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:54.868 [2024-02-13 08:06:28.343096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086847 ] 00:05:54.868 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.868 [2024-02-13 08:06:28.402463] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.868 [2024-02-13 08:06:28.468832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.868 [2024-02-13 08:06:28.468888] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val= 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val= 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val=0x1 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val= 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val= 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- 
accel/accel.sh@21 -- # val=fill 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val=0x80 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val= 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val=software 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val=64 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val=64 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val=1 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # 
read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val=Yes 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val= 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:54.868 08:06:28 -- accel/accel.sh@21 -- # val= 00:05:54.868 08:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:54.868 08:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:56.246 [2024-02-13 08:06:29.513606] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:56.246 08:06:29 -- accel/accel.sh@21 -- # val= 00:05:56.246 08:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:56.246 08:06:29 -- accel/accel.sh@21 -- # val= 00:05:56.246 08:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:56.246 08:06:29 -- accel/accel.sh@21 -- # val= 00:05:56.246 08:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:56.246 08:06:29 -- accel/accel.sh@21 -- # val= 00:05:56.246 08:06:29 -- accel/accel.sh@22 -- 
# case "$var" in 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:56.246 08:06:29 -- accel/accel.sh@21 -- # val= 00:05:56.246 08:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:56.246 08:06:29 -- accel/accel.sh@21 -- # val= 00:05:56.246 08:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # IFS=: 00:05:56.246 08:06:29 -- accel/accel.sh@20 -- # read -r var val 00:05:56.246 08:06:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.246 08:06:29 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:56.246 08:06:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.246 00:05:56.246 real 0m2.702s 00:05:56.246 user 0m2.466s 00:05:56.246 sys 0m0.245s 00:05:56.246 08:06:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.246 08:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:56.246 ************************************ 00:05:56.246 END TEST accel_fill 00:05:56.246 ************************************ 00:05:56.246 08:06:29 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:56.246 08:06:29 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:56.246 08:06:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:56.246 08:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:56.246 ************************************ 00:05:56.246 START TEST accel_copy_crc32c 00:05:56.246 ************************************ 00:05:56.246 08:06:29 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y 00:05:56.246 08:06:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.246 08:06:29 -- accel/accel.sh@17 -- # local accel_module 00:05:56.246 08:06:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:56.246 08:06:29 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:56.246 08:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.246 08:06:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.246 08:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.246 08:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.246 08:06:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.246 08:06:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.246 08:06:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.246 08:06:29 -- accel/accel.sh@42 -- # jq -r . 00:05:56.246 [2024-02-13 08:06:29.732577] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:05:56.246 [2024-02-13 08:06:29.732642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087094 ] 00:05:56.246 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.246 [2024-02-13 08:06:29.793745] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.246 [2024-02-13 08:06:29.861772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.246 [2024-02-13 08:06:29.861829] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:57.624 [2024-02-13 08:06:30.906241] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:57.624 08:06:31 -- accel/accel.sh@18 -- # out=' 00:05:57.624 SPDK Configuration: 00:05:57.624 Core mask: 0x1 00:05:57.624 00:05:57.624 Accel Perf Configuration: 00:05:57.624 Workload Type: copy_crc32c 00:05:57.624 
CRC-32C seed: 0 00:05:57.624 Vector size: 4096 bytes 00:05:57.624 Transfer size: 4096 bytes 00:05:57.624 Vector count 1 00:05:57.624 Module: software 00:05:57.624 Queue depth: 32 00:05:57.624 Allocate depth: 32 00:05:57.624 # threads/core: 1 00:05:57.624 Run time: 1 seconds 00:05:57.624 Verify: Yes 00:05:57.624 00:05:57.624 Running for 1 seconds... 00:05:57.624 00:05:57.624 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.624 ------------------------------------------------------------------------------------ 00:05:57.624 0,0 320096/s 1250 MiB/s 0 0 00:05:57.624 ==================================================================================== 00:05:57.624 Total 320096/s 1250 MiB/s 0 0' 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:57.624 08:06:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:57.624 08:06:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.624 08:06:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.624 08:06:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.624 08:06:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.624 08:06:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.624 08:06:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.624 08:06:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.624 08:06:31 -- accel/accel.sh@42 -- # jq -r . 00:05:57.624 [2024-02-13 08:06:31.084200] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:57.624 [2024-02-13 08:06:31.084280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087332 ] 00:05:57.624 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.624 [2024-02-13 08:06:31.143497] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.624 [2024-02-13 08:06:31.210091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.624 [2024-02-13 08:06:31.210141] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val= 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val= 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val=0x1 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val= 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val= 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- 
accel/accel.sh@21 -- # val=copy_crc32c 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val=0 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val= 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val=software 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val=32 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val=32 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- 
accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val=1 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.624 08:06:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.624 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.624 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.625 08:06:31 -- accel/accel.sh@21 -- # val=Yes 00:05:57.625 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.625 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.625 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.625 08:06:31 -- accel/accel.sh@21 -- # val= 00:05:57.625 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.625 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.625 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:57.625 08:06:31 -- accel/accel.sh@21 -- # val= 00:05:57.625 08:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.625 08:06:31 -- accel/accel.sh@20 -- # IFS=: 00:05:57.625 08:06:31 -- accel/accel.sh@20 -- # read -r var val 00:05:58.996 [2024-02-13 08:06:32.254564] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:58.996 08:06:32 -- accel/accel.sh@21 -- # val= 00:05:58.996 08:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:58.996 08:06:32 -- accel/accel.sh@21 -- # val= 00:05:58.996 08:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:58.996 08:06:32 -- accel/accel.sh@21 -- # val= 00:05:58.996 08:06:32 
-- accel/accel.sh@22 -- # case "$var" in 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:58.996 08:06:32 -- accel/accel.sh@21 -- # val= 00:05:58.996 08:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:58.996 08:06:32 -- accel/accel.sh@21 -- # val= 00:05:58.996 08:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:58.996 08:06:32 -- accel/accel.sh@21 -- # val= 00:05:58.996 08:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:58.996 08:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:58.996 08:06:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.996 08:06:32 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:58.996 08:06:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.996 00:05:58.996 real 0m2.703s 00:05:58.996 user 0m2.488s 00:05:58.996 sys 0m0.224s 00:05:58.996 08:06:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.996 08:06:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.996 ************************************ 00:05:58.996 END TEST accel_copy_crc32c 00:05:58.996 ************************************ 00:05:58.996 08:06:32 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:58.996 08:06:32 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:58.996 08:06:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:58.996 08:06:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.996 ************************************ 00:05:58.996 START TEST accel_copy_crc32c_C2 00:05:58.996 ************************************ 00:05:58.996 08:06:32 -- common/autotest_common.sh@1102 -- # 
accel_test -t 1 -w copy_crc32c -y -C 2 00:05:58.996 08:06:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.996 08:06:32 -- accel/accel.sh@17 -- # local accel_module 00:05:58.996 08:06:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:58.996 08:06:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:58.996 08:06:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.996 08:06:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.996 08:06:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.996 08:06:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.996 08:06:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.996 08:06:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.996 08:06:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.996 08:06:32 -- accel/accel.sh@42 -- # jq -r . 00:05:58.996 [2024-02-13 08:06:32.474604] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:05:58.996 [2024-02-13 08:06:32.474669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087576 ] 00:05:58.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.996 [2024-02-13 08:06:32.537940] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.996 [2024-02-13 08:06:32.602798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.996 [2024-02-13 08:06:32.602853] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:00.372 [2024-02-13 08:06:33.647607] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:00.372 08:06:33 -- accel/accel.sh@18 -- # out=' 00:06:00.372 SPDK Configuration: 00:06:00.372 Core mask: 0x1 00:06:00.372 00:06:00.372 Accel Perf Configuration: 00:06:00.372 Workload Type: copy_crc32c 00:06:00.372 CRC-32C seed: 0 00:06:00.372 Vector size: 4096 bytes 00:06:00.372 Transfer size: 8192 bytes 00:06:00.372 Vector count 2 00:06:00.372 Module: software 00:06:00.372 Queue depth: 32 00:06:00.372 Allocate depth: 32 00:06:00.372 # threads/core: 1 00:06:00.372 Run time: 1 seconds 00:06:00.372 Verify: Yes 00:06:00.372 00:06:00.372 Running for 1 seconds... 
00:06:00.372 00:06:00.372 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.372 ------------------------------------------------------------------------------------ 00:06:00.372 0,0 240704/s 1880 MiB/s 0 0 00:06:00.372 ==================================================================================== 00:06:00.372 Total 240704/s 1880 MiB/s 0 0' 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.372 08:06:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:00.372 08:06:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:00.372 08:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.372 08:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.372 08:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.372 08:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.372 08:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.372 08:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.372 08:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.372 08:06:33 -- accel/accel.sh@42 -- # jq -r . 00:06:00.372 [2024-02-13 08:06:33.824492] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:06:00.372 [2024-02-13 08:06:33.824572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087807 ] 00:06:00.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.372 [2024-02-13 08:06:33.885241] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.372 [2024-02-13 08:06:33.951981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.372 [2024-02-13 08:06:33.952034] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:00.372 08:06:33 -- accel/accel.sh@21 -- # val= 00:06:00.372 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.372 08:06:33 -- accel/accel.sh@21 -- # val= 00:06:00.372 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.372 08:06:33 -- accel/accel.sh@21 -- # val=0x1 00:06:00.372 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.372 08:06:33 -- accel/accel.sh@21 -- # val= 00:06:00.372 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.372 08:06:33 -- accel/accel.sh@21 -- # val= 00:06:00.372 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.372 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.372 08:06:33 -- 
accel/accel.sh@21 -- # val=copy_crc32c 00:06:00.372 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:33 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:00.373 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:33 -- accel/accel.sh@21 -- # val=0 00:06:00.373 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.373 08:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val= 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val=software 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val=32 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val=32 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- 
accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val=1 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val=Yes 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val= 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:00.373 08:06:34 -- accel/accel.sh@21 -- # val= 00:06:00.373 08:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:00.373 08:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:01.341 [2024-02-13 08:06:34.996527] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:01.611 08:06:35 -- accel/accel.sh@21 -- # val= 00:06:01.611 08:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # IFS=: 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # read -r var val 00:06:01.611 08:06:35 -- accel/accel.sh@21 -- # val= 00:06:01.611 08:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # IFS=: 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # read -r var val 00:06:01.611 08:06:35 -- accel/accel.sh@21 -- # val= 00:06:01.611 08:06:35 
-- accel/accel.sh@22 -- # case "$var" in 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # IFS=: 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # read -r var val 00:06:01.611 08:06:35 -- accel/accel.sh@21 -- # val= 00:06:01.611 08:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # IFS=: 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # read -r var val 00:06:01.611 08:06:35 -- accel/accel.sh@21 -- # val= 00:06:01.611 08:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # IFS=: 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # read -r var val 00:06:01.611 08:06:35 -- accel/accel.sh@21 -- # val= 00:06:01.611 08:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # IFS=: 00:06:01.611 08:06:35 -- accel/accel.sh@20 -- # read -r var val 00:06:01.611 08:06:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.611 08:06:35 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:01.611 08:06:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.611 00:06:01.611 real 0m2.704s 00:06:01.611 user 0m2.488s 00:06:01.611 sys 0m0.226s 00:06:01.611 08:06:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.611 08:06:35 -- common/autotest_common.sh@10 -- # set +x 00:06:01.611 ************************************ 00:06:01.611 END TEST accel_copy_crc32c_C2 00:06:01.611 ************************************ 00:06:01.611 08:06:35 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:01.611 08:06:35 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:01.611 08:06:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:01.611 08:06:35 -- common/autotest_common.sh@10 -- # set +x 00:06:01.611 ************************************ 00:06:01.611 START TEST accel_dualcast 00:06:01.611 ************************************ 00:06:01.611 08:06:35 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w 
dualcast -y 00:06:01.611 08:06:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.611 08:06:35 -- accel/accel.sh@17 -- # local accel_module 00:06:01.611 08:06:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:01.611 08:06:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:01.611 08:06:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.611 08:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.611 08:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.611 08:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.611 08:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.611 08:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.611 08:06:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.611 08:06:35 -- accel/accel.sh@42 -- # jq -r . 00:06:01.611 [2024-02-13 08:06:35.216515] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:01.611 [2024-02-13 08:06:35.216581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088061 ] 00:06:01.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.611 [2024-02-13 08:06:35.278439] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.869 [2024-02-13 08:06:35.347405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.870 [2024-02-13 08:06:35.347457] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:02.806 [2024-02-13 08:06:36.391736] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in 
v24.09 hit 1 times 00:06:03.065 08:06:36 -- accel/accel.sh@18 -- # out=' 00:06:03.065 SPDK Configuration: 00:06:03.065 Core mask: 0x1 00:06:03.065 00:06:03.065 Accel Perf Configuration: 00:06:03.065 Workload Type: dualcast 00:06:03.065 Transfer size: 4096 bytes 00:06:03.065 Vector count 1 00:06:03.065 Module: software 00:06:03.065 Queue depth: 32 00:06:03.065 Allocate depth: 32 00:06:03.065 # threads/core: 1 00:06:03.065 Run time: 1 seconds 00:06:03.065 Verify: Yes 00:06:03.065 00:06:03.065 Running for 1 seconds... 00:06:03.065 00:06:03.065 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.065 ------------------------------------------------------------------------------------ 00:06:03.065 0,0 501408/s 1958 MiB/s 0 0 00:06:03.065 ==================================================================================== 00:06:03.065 Total 501408/s 1958 MiB/s 0 0' 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:03.065 08:06:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:03.065 08:06:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.065 08:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.065 08:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.065 08:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.065 08:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.065 08:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.065 08:06:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.065 08:06:36 -- accel/accel.sh@42 -- # jq -r . 00:06:03.065 [2024-02-13 08:06:36.570059] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:03.065 [2024-02-13 08:06:36.570134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088293 ] 00:06:03.065 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.065 [2024-02-13 08:06:36.633893] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.065 [2024-02-13 08:06:36.698032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.065 [2024-02-13 08:06:36.698086] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val= 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val= 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val=0x1 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val= 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val= 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- 
accel/accel.sh@21 -- # val=dualcast 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val= 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val=software 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val=32 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val=32 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val=1 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- 
accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val=Yes 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val= 00:06:03.065 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.065 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:03.065 08:06:36 -- accel/accel.sh@21 -- # val= 00:06:03.324 08:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.324 08:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:03.324 08:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:04.261 [2024-02-13 08:06:37.742544] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:04.261 08:06:37 -- accel/accel.sh@21 -- # val= 00:06:04.261 08:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:04.261 08:06:37 -- accel/accel.sh@21 -- # val= 00:06:04.261 08:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:04.261 08:06:37 -- accel/accel.sh@21 -- # val= 00:06:04.261 08:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:04.261 08:06:37 -- accel/accel.sh@21 -- # val= 00:06:04.261 08:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:04.261 08:06:37 -- accel/accel.sh@21 -- # val= 00:06:04.261 08:06:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:04.261 08:06:37 -- accel/accel.sh@21 -- # val= 00:06:04.261 08:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:04.261 08:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:04.261 08:06:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.261 08:06:37 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:04.261 08:06:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.261 00:06:04.261 real 0m2.708s 00:06:04.261 user 0m2.481s 00:06:04.261 sys 0m0.234s 00:06:04.261 08:06:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.261 08:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:04.261 ************************************ 00:06:04.261 END TEST accel_dualcast 00:06:04.261 ************************************ 00:06:04.261 08:06:37 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:04.261 08:06:37 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:04.261 08:06:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:04.261 08:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:04.261 ************************************ 00:06:04.261 START TEST accel_compare 00:06:04.261 ************************************ 00:06:04.261 08:06:37 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compare -y 00:06:04.261 08:06:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.261 08:06:37 -- accel/accel.sh@17 -- # local accel_module 00:06:04.261 08:06:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:04.261 08:06:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:04.261 08:06:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.261 08:06:37 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.261 08:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.261 08:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.261 08:06:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.261 08:06:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.261 08:06:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.261 08:06:37 -- accel/accel.sh@42 -- # jq -r . 00:06:04.521 [2024-02-13 08:06:37.962760] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:04.521 [2024-02-13 08:06:37.962836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088547 ] 00:06:04.521 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.521 [2024-02-13 08:06:38.022386] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.521 [2024-02-13 08:06:38.089917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.521 [2024-02-13 08:06:38.089970] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:05.458 [2024-02-13 08:06:39.133778] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:05.717 08:06:39 -- accel/accel.sh@18 -- # out=' 00:06:05.717 SPDK Configuration: 00:06:05.717 Core mask: 0x1 00:06:05.717 00:06:05.717 Accel Perf Configuration: 00:06:05.717 Workload Type: compare 00:06:05.717 Transfer size: 4096 bytes 00:06:05.717 Vector count 1 00:06:05.717 Module: software 00:06:05.717 Queue depth: 32 00:06:05.717 Allocate depth: 32 00:06:05.717 # threads/core: 1 00:06:05.717 Run time: 1 seconds 00:06:05.717 Verify: Yes 
00:06:05.717 00:06:05.717 Running for 1 seconds... 00:06:05.717 00:06:05.717 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:05.717 ------------------------------------------------------------------------------------ 00:06:05.717 0,0 623072/s 2433 MiB/s 0 0 00:06:05.717 ==================================================================================== 00:06:05.717 Total 623072/s 2433 MiB/s 0 0' 00:06:05.717 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.717 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.717 08:06:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:05.717 08:06:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:05.717 08:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.717 08:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.717 08:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.717 08:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.717 08:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.717 08:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.717 08:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.717 08:06:39 -- accel/accel.sh@42 -- # jq -r . 00:06:05.717 [2024-02-13 08:06:39.311411] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:05.717 [2024-02-13 08:06:39.311482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088778 ] 00:06:05.717 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.717 [2024-02-13 08:06:39.373372] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.976 [2024-02-13 08:06:39.442375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.976 [2024-02-13 08:06:39.442427] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val= 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val= 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val=0x1 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val= 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val= 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- 
accel/accel.sh@21 -- # val=compare 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val= 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val=software 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val=32 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val=32 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val=1 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- 
accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val=Yes 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val= 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:05.976 08:06:39 -- accel/accel.sh@21 -- # val= 00:06:05.976 08:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:05.976 08:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:06.909 [2024-02-13 08:06:40.486837] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:07.168 08:06:40 -- accel/accel.sh@21 -- # val= 00:06:07.168 08:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.168 08:06:40 -- accel/accel.sh@21 -- # val= 00:06:07.168 08:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.168 08:06:40 -- accel/accel.sh@21 -- # val= 00:06:07.168 08:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.168 08:06:40 -- accel/accel.sh@21 -- # val= 00:06:07.168 08:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.168 08:06:40 -- accel/accel.sh@21 -- # val= 00:06:07.168 08:06:40 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.168 08:06:40 -- accel/accel.sh@21 -- # val= 00:06:07.168 08:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # IFS=: 00:06:07.168 08:06:40 -- accel/accel.sh@20 -- # read -r var val 00:06:07.168 08:06:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.168 08:06:40 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:07.168 08:06:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.168 00:06:07.168 real 0m2.706s 00:06:07.168 user 0m2.487s 00:06:07.168 sys 0m0.226s 00:06:07.168 08:06:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.168 08:06:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.168 ************************************ 00:06:07.168 END TEST accel_compare 00:06:07.168 ************************************ 00:06:07.168 08:06:40 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:07.168 08:06:40 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:06:07.168 08:06:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:07.168 08:06:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.168 ************************************ 00:06:07.168 START TEST accel_xor 00:06:07.168 ************************************ 00:06:07.168 08:06:40 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y 00:06:07.168 08:06:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.168 08:06:40 -- accel/accel.sh@17 -- # local accel_module 00:06:07.168 08:06:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:07.168 08:06:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:07.168 08:06:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.168 08:06:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 
00:06:07.168 08:06:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.168 08:06:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.168 08:06:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.168 08:06:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.168 08:06:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.168 08:06:40 -- accel/accel.sh@42 -- # jq -r . 00:06:07.168 [2024-02-13 08:06:40.705155] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:07.168 [2024-02-13 08:06:40.705222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089026 ] 00:06:07.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.168 [2024-02-13 08:06:40.766946] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.168 [2024-02-13 08:06:40.834607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.168 [2024-02-13 08:06:40.834666] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:08.543 [2024-02-13 08:06:41.879433] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:08.543 08:06:42 -- accel/accel.sh@18 -- # out=' 00:06:08.543 SPDK Configuration: 00:06:08.543 Core mask: 0x1 00:06:08.543 00:06:08.543 Accel Perf Configuration: 00:06:08.543 Workload Type: xor 00:06:08.543 Source buffers: 2 00:06:08.543 Transfer size: 4096 bytes 00:06:08.543 Vector count 1 00:06:08.543 Module: software 00:06:08.543 Queue depth: 32 00:06:08.543 Allocate depth: 32 00:06:08.543 # threads/core: 1 00:06:08.543 Run time: 1 seconds 00:06:08.543 Verify: Yes 00:06:08.543 
00:06:08.543 Running for 1 seconds... 00:06:08.543 00:06:08.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.543 ------------------------------------------------------------------------------------ 00:06:08.543 0,0 499168/s 1949 MiB/s 0 0 00:06:08.543 ==================================================================================== 00:06:08.543 Total 499168/s 1949 MiB/s 0 0' 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:08.543 08:06:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:08.543 08:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.543 08:06:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.543 08:06:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.543 08:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.543 08:06:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.543 08:06:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.543 08:06:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.543 08:06:42 -- accel/accel.sh@42 -- # jq -r . 00:06:08.543 [2024-02-13 08:06:42.056279] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:08.543 [2024-02-13 08:06:42.056358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089263 ] 00:06:08.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.543 [2024-02-13 08:06:42.115172] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.543 [2024-02-13 08:06:42.181110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.543 [2024-02-13 08:06:42.181166] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val= 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val= 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val=0x1 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val= 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val= 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- 
accel/accel.sh@21 -- # val=xor 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val=2 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.543 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.543 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.543 08:06:42 -- accel/accel.sh@21 -- # val= 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val=software 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val=32 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val=32 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val=1 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r 
var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val=Yes 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val= 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:08.801 08:06:42 -- accel/accel.sh@21 -- # val= 00:06:08.801 08:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:08.801 08:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:09.737 [2024-02-13 08:06:43.225671] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:09.737 08:06:43 -- accel/accel.sh@21 -- # val= 00:06:09.737 08:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:09.737 08:06:43 -- accel/accel.sh@21 -- # val= 00:06:09.737 08:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:09.737 08:06:43 -- accel/accel.sh@21 -- # val= 00:06:09.737 08:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:09.737 08:06:43 -- accel/accel.sh@21 -- # val= 00:06:09.737 08:06:43 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:09.737 08:06:43 -- accel/accel.sh@21 -- # val= 00:06:09.737 08:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:09.737 08:06:43 -- accel/accel.sh@21 -- # val= 00:06:09.737 08:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # IFS=: 00:06:09.737 08:06:43 -- accel/accel.sh@20 -- # read -r var val 00:06:09.737 08:06:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.737 08:06:43 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:09.737 08:06:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.737 00:06:09.737 real 0m2.702s 00:06:09.737 user 0m2.487s 00:06:09.737 sys 0m0.223s 00:06:09.737 08:06:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.737 08:06:43 -- common/autotest_common.sh@10 -- # set +x 00:06:09.737 ************************************ 00:06:09.737 END TEST accel_xor 00:06:09.737 ************************************ 00:06:09.737 08:06:43 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:09.737 08:06:43 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:06:09.737 08:06:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:09.737 08:06:43 -- common/autotest_common.sh@10 -- # set +x 00:06:09.737 ************************************ 00:06:09.737 START TEST accel_xor 00:06:09.737 ************************************ 00:06:09.737 08:06:43 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y -x 3 00:06:09.737 08:06:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.737 08:06:43 -- accel/accel.sh@17 -- # local accel_module 00:06:09.737 08:06:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:09.737 08:06:43 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:09.737 08:06:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.737 08:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.995 08:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.995 08:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.995 08:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.995 08:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.995 08:06:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.995 08:06:43 -- accel/accel.sh@42 -- # jq -r . 00:06:09.995 [2024-02-13 08:06:43.445437] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:09.995 [2024-02-13 08:06:43.445505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089507 ] 00:06:09.995 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.995 [2024-02-13 08:06:43.507750] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.995 [2024-02-13 08:06:43.575349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.995 [2024-02-13 08:06:43.575402] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:11.371 [2024-02-13 08:06:44.620161] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:11.371 08:06:44 -- accel/accel.sh@18 -- # out=' 00:06:11.371 SPDK Configuration: 00:06:11.371 Core mask: 0x1 00:06:11.371 00:06:11.371 Accel Perf Configuration: 00:06:11.371 Workload Type: xor 00:06:11.371 Source buffers: 3 00:06:11.371 
Transfer size: 4096 bytes 00:06:11.371 Vector count 1 00:06:11.371 Module: software 00:06:11.371 Queue depth: 32 00:06:11.371 Allocate depth: 32 00:06:11.371 # threads/core: 1 00:06:11.371 Run time: 1 seconds 00:06:11.371 Verify: Yes 00:06:11.371 00:06:11.371 Running for 1 seconds... 00:06:11.371 00:06:11.371 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:11.371 ------------------------------------------------------------------------------------ 00:06:11.371 0,0 470912/s 1839 MiB/s 0 0 00:06:11.371 ==================================================================================== 00:06:11.371 Total 470912/s 1839 MiB/s 0 0' 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:11.371 08:06:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:11.371 08:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.371 08:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.371 08:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.371 08:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.371 08:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.371 08:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.371 08:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.371 08:06:44 -- accel/accel.sh@42 -- # jq -r . 00:06:11.371 [2024-02-13 08:06:44.796446] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:11.371 [2024-02-13 08:06:44.796505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089738 ] 00:06:11.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.371 [2024-02-13 08:06:44.855134] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.371 [2024-02-13 08:06:44.921222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.371 [2024-02-13 08:06:44.921276] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val= 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val= 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val=0x1 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val= 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val= 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- 
accel/accel.sh@21 -- # val=xor 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val=3 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val= 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val=software 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val=32 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val=32 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val=1 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r 
var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val=Yes 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val= 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:11.371 08:06:44 -- accel/accel.sh@21 -- # val= 00:06:11.371 08:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:11.371 08:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:12.307 [2024-02-13 08:06:45.965618] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:12.565 08:06:46 -- accel/accel.sh@21 -- # val= 00:06:12.565 08:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:12.565 08:06:46 -- accel/accel.sh@21 -- # val= 00:06:12.565 08:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:12.565 08:06:46 -- accel/accel.sh@21 -- # val= 00:06:12.565 08:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:12.565 08:06:46 -- accel/accel.sh@21 -- # val= 00:06:12.565 08:06:46 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:12.565 08:06:46 -- accel/accel.sh@21 -- # val= 00:06:12.565 08:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:12.565 08:06:46 -- accel/accel.sh@21 -- # val= 00:06:12.565 08:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:12.565 08:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:12.565 08:06:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.565 08:06:46 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:12.565 08:06:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.565 00:06:12.565 real 0m2.703s 00:06:12.565 user 0m2.478s 00:06:12.565 sys 0m0.233s 00:06:12.565 08:06:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.565 08:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 ************************************ 00:06:12.565 END TEST accel_xor 00:06:12.565 ************************************ 00:06:12.565 08:06:46 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:12.565 08:06:46 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:06:12.565 08:06:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:12.565 08:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 ************************************ 00:06:12.565 START TEST accel_dif_verify 00:06:12.565 ************************************ 00:06:12.565 08:06:46 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_verify 00:06:12.565 08:06:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.565 08:06:46 -- accel/accel.sh@17 -- # local accel_module 00:06:12.565 08:06:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:12.565 08:06:46 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:12.565 08:06:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.565 08:06:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.565 08:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.565 08:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.565 08:06:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.565 08:06:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.565 08:06:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.565 08:06:46 -- accel/accel.sh@42 -- # jq -r . 00:06:12.565 [2024-02-13 08:06:46.187763] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:12.565 [2024-02-13 08:06:46.187842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089991 ] 00:06:12.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.565 [2024-02-13 08:06:46.248666] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.824 [2024-02-13 08:06:46.316716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.824 [2024-02-13 08:06:46.316769] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:13.759 [2024-02-13 08:06:47.361323] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:14.018 08:06:47 -- accel/accel.sh@18 -- # out=' 00:06:14.018 SPDK Configuration: 00:06:14.018 Core mask: 0x1 00:06:14.018 00:06:14.018 Accel Perf Configuration: 00:06:14.018 Workload Type: dif_verify 00:06:14.018 Vector size: 4096 bytes 
00:06:14.018 Transfer size: 4096 bytes 00:06:14.018 Block size: 512 bytes 00:06:14.018 Metadata size: 8 bytes 00:06:14.018 Vector count 1 00:06:14.018 Module: software 00:06:14.018 Queue depth: 32 00:06:14.018 Allocate depth: 32 00:06:14.018 # threads/core: 1 00:06:14.018 Run time: 1 seconds 00:06:14.018 Verify: No 00:06:14.018 00:06:14.018 Running for 1 seconds... 00:06:14.018 00:06:14.018 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.018 ------------------------------------------------------------------------------------ 00:06:14.018 0,0 134656/s 534 MiB/s 0 0 00:06:14.018 ==================================================================================== 00:06:14.018 Total 134656/s 526 MiB/s 0 0' 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.018 08:06:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:14.018 08:06:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:14.018 08:06:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.018 08:06:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.018 08:06:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.018 08:06:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.018 08:06:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.018 08:06:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.018 08:06:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.018 08:06:47 -- accel/accel.sh@42 -- # jq -r . 00:06:14.018 [2024-02-13 08:06:47.536762] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:14.018 [2024-02-13 08:06:47.536822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090222 ] 00:06:14.018 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.018 [2024-02-13 08:06:47.594954] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.018 [2024-02-13 08:06:47.661341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.018 [2024-02-13 08:06:47.661394] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:14.018 08:06:47 -- accel/accel.sh@21 -- # val= 00:06:14.018 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.018 08:06:47 -- accel/accel.sh@21 -- # val= 00:06:14.018 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.018 08:06:47 -- accel/accel.sh@21 -- # val=0x1 00:06:14.018 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.018 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.018 08:06:47 -- accel/accel.sh@21 -- # val= 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val= 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- 
accel/accel.sh@21 -- # val=dif_verify 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val= 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val=software 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val=32 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 
08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val=32 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val=1 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val=No 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val= 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:14.277 08:06:47 -- accel/accel.sh@21 -- # val= 00:06:14.277 08:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # IFS=: 00:06:14.277 08:06:47 -- accel/accel.sh@20 -- # read -r var val 00:06:15.212 [2024-02-13 08:06:48.706364] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:15.212 08:06:48 -- accel/accel.sh@21 -- # val= 00:06:15.212 08:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.212 08:06:48 -- accel/accel.sh@21 -- # val= 
00:06:15.212 08:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.212 08:06:48 -- accel/accel.sh@21 -- # val= 00:06:15.212 08:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.212 08:06:48 -- accel/accel.sh@21 -- # val= 00:06:15.212 08:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.212 08:06:48 -- accel/accel.sh@21 -- # val= 00:06:15.212 08:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.212 08:06:48 -- accel/accel.sh@21 -- # val= 00:06:15.212 08:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:15.212 08:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:15.212 08:06:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.212 08:06:48 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:15.212 08:06:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.212 00:06:15.212 real 0m2.702s 00:06:15.212 user 0m2.494s 00:06:15.212 sys 0m0.218s 00:06:15.212 08:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.212 08:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.212 ************************************ 00:06:15.212 END TEST accel_dif_verify 00:06:15.212 ************************************ 00:06:15.212 08:06:48 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:15.212 08:06:48 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:06:15.212 08:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:15.212 08:06:48 -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.471 ************************************ 00:06:15.471 START TEST accel_dif_generate 00:06:15.471 ************************************ 00:06:15.471 08:06:48 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate 00:06:15.471 08:06:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.471 08:06:48 -- accel/accel.sh@17 -- # local accel_module 00:06:15.471 08:06:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:15.471 08:06:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:15.471 08:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.471 08:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.471 08:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.471 08:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.471 08:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.471 08:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.471 08:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.471 08:06:48 -- accel/accel.sh@42 -- # jq -r . 00:06:15.471 [2024-02-13 08:06:48.927026] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:15.471 [2024-02-13 08:06:48.927099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090466 ] 00:06:15.471 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.471 [2024-02-13 08:06:48.988593] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.471 [2024-02-13 08:06:49.057602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.471 [2024-02-13 08:06:49.057661] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:16.848 [2024-02-13 08:06:50.102308] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:16.848 08:06:50 -- accel/accel.sh@18 -- # out=' 00:06:16.848 SPDK Configuration: 00:06:16.848 Core mask: 0x1 00:06:16.848 00:06:16.848 Accel Perf Configuration: 00:06:16.848 Workload Type: dif_generate 00:06:16.848 Vector size: 4096 bytes 00:06:16.848 Transfer size: 4096 bytes 00:06:16.848 Block size: 512 bytes 00:06:16.848 Metadata size: 8 bytes 00:06:16.848 Vector count 1 00:06:16.848 Module: software 00:06:16.848 Queue depth: 32 00:06:16.848 Allocate depth: 32 00:06:16.848 # threads/core: 1 00:06:16.848 Run time: 1 seconds 00:06:16.848 Verify: No 00:06:16.848 00:06:16.848 Running for 1 seconds... 
00:06:16.848 00:06:16.848 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.848 ------------------------------------------------------------------------------------ 00:06:16.848 0,0 155872/s 618 MiB/s 0 0 00:06:16.848 ==================================================================================== 00:06:16.848 Total 155872/s 608 MiB/s 0 0' 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:16.848 08:06:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:16.848 08:06:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.848 08:06:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.848 08:06:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.848 08:06:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.848 08:06:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.848 08:06:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.848 08:06:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.848 08:06:50 -- accel/accel.sh@42 -- # jq -r . 00:06:16.848 [2024-02-13 08:06:50.277585] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:16.848 [2024-02-13 08:06:50.277656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090701 ] 00:06:16.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.848 [2024-02-13 08:06:50.336983] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.848 [2024-02-13 08:06:50.404367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.848 [2024-02-13 08:06:50.404421] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val= 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val= 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val=0x1 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val= 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val= 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- 
accel/accel.sh@21 -- # val=dif_generate 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val= 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val=software 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val=32 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 
00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val=32 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val=1 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val=No 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val= 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:16.848 08:06:50 -- accel/accel.sh@21 -- # val= 00:06:16.848 08:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:16.848 08:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:17.815 [2024-02-13 08:06:51.448715] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:18.074 08:06:51 -- accel/accel.sh@21 -- # val= 00:06:18.074 08:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:18.074 08:06:51 -- accel/accel.sh@21 -- # 
val= 00:06:18.074 08:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:18.074 08:06:51 -- accel/accel.sh@21 -- # val= 00:06:18.074 08:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:18.074 08:06:51 -- accel/accel.sh@21 -- # val= 00:06:18.074 08:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:18.074 08:06:51 -- accel/accel.sh@21 -- # val= 00:06:18.074 08:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:18.074 08:06:51 -- accel/accel.sh@21 -- # val= 00:06:18.074 08:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # IFS=: 00:06:18.074 08:06:51 -- accel/accel.sh@20 -- # read -r var val 00:06:18.074 08:06:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.074 08:06:51 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:18.074 08:06:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.074 00:06:18.074 real 0m2.705s 00:06:18.074 user 0m2.488s 00:06:18.074 sys 0m0.225s 00:06:18.074 08:06:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.074 08:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.074 ************************************ 00:06:18.074 END TEST accel_dif_generate 00:06:18.074 ************************************ 00:06:18.074 08:06:51 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:18.074 08:06:51 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:06:18.074 08:06:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 
00:06:18.074 08:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.074 ************************************ 00:06:18.074 START TEST accel_dif_generate_copy 00:06:18.074 ************************************ 00:06:18.074 08:06:51 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate_copy 00:06:18.074 08:06:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.074 08:06:51 -- accel/accel.sh@17 -- # local accel_module 00:06:18.074 08:06:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:18.074 08:06:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:18.074 08:06:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.074 08:06:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.074 08:06:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.074 08:06:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.074 08:06:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.074 08:06:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.074 08:06:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.074 08:06:51 -- accel/accel.sh@42 -- # jq -r . 00:06:18.074 [2024-02-13 08:06:51.670485] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:18.074 [2024-02-13 08:06:51.670548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090956 ] 00:06:18.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.074 [2024-02-13 08:06:51.729593] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.333 [2024-02-13 08:06:51.798132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.333 [2024-02-13 08:06:51.798183] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:19.268 [2024-02-13 08:06:52.842272] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:19.527 08:06:52 -- accel/accel.sh@18 -- # out=' 00:06:19.527 SPDK Configuration: 00:06:19.527 Core mask: 0x1 00:06:19.527 00:06:19.527 Accel Perf Configuration: 00:06:19.527 Workload Type: dif_generate_copy 00:06:19.527 Vector size: 4096 bytes 00:06:19.527 Transfer size: 4096 bytes 00:06:19.527 Vector count 1 00:06:19.527 Module: software 00:06:19.527 Queue depth: 32 00:06:19.527 Allocate depth: 32 00:06:19.527 # threads/core: 1 00:06:19.527 Run time: 1 seconds 00:06:19.527 Verify: No 00:06:19.527 00:06:19.527 Running for 1 seconds... 
00:06:19.527 00:06:19.527 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.527 ------------------------------------------------------------------------------------ 00:06:19.527 0,0 125600/s 490 MiB/s 0 0 00:06:19.527 ==================================================================================== 00:06:19.527 Total 125600/s 490 MiB/s 0 0' 00:06:19.527 08:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:19.527 08:06:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:19.527 08:06:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.527 08:06:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.527 08:06:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.527 08:06:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.527 08:06:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.527 08:06:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.527 08:06:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.527 08:06:52 -- accel/accel.sh@42 -- # jq -r . 00:06:19.527 [2024-02-13 08:06:53.020058] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
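The Bandwidth column in the accel_perf tables above follows directly from the Transfers column and the 4096-byte transfer size: MiB/s = transfers_per_s × bytes_per_transfer ÷ 2^20. A minimal bash sketch of that arithmetic (the helper name is ours, not part of accel_perf):

```shell
# Convert a transfers/s rate at a given transfer size to MiB/s, the unit
# reported in the accel_perf summary table above.
mib_per_s() {
    local xfers_per_s=$1 xfer_bytes=$2
    # integer arithmetic; 1048576 = 2^20 bytes per MiB
    echo $(( xfers_per_s * xfer_bytes / 1048576 ))
}

mib_per_s 125600 4096   # → 490, matching the Total line above
```

The same check can be applied to any row of the later tables in this run.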
00:06:19.527 [2024-02-13 08:06:53.020133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091185 ] 00:06:19.527 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.527 [2024-02-13 08:06:53.081588] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.527 [2024-02-13 08:06:53.148292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.527 [2024-02-13 08:06:53.148347] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val= 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val= 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val=0x1 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val= 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val= 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- 
accel/accel.sh@21 -- # val=dif_generate_copy 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val= 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val=software 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val=32 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val=32 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val=1 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 
08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val=No 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val= 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:19.527 08:06:53 -- accel/accel.sh@21 -- # val= 00:06:19.527 08:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:19.527 08:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:20.903 [2024-02-13 08:06:54.193125] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:20.903 08:06:54 -- accel/accel.sh@21 -- # val= 00:06:20.903 08:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:20.903 08:06:54 -- accel/accel.sh@21 -- # val= 00:06:20.903 08:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:20.903 08:06:54 -- accel/accel.sh@21 -- # val= 00:06:20.903 08:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:20.903 08:06:54 -- accel/accel.sh@21 -- # val= 
00:06:20.903 08:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:20.903 08:06:54 -- accel/accel.sh@21 -- # val= 00:06:20.903 08:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:20.903 08:06:54 -- accel/accel.sh@21 -- # val= 00:06:20.903 08:06:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:20.903 08:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:20.904 08:06:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.904 08:06:54 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:20.904 08:06:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.904 00:06:20.904 real 0m2.704s 00:06:20.904 user 0m2.475s 00:06:20.904 sys 0m0.237s 00:06:20.904 08:06:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.904 08:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.904 ************************************ 00:06:20.904 END TEST accel_dif_generate_copy 00:06:20.904 ************************************ 00:06:20.904 08:06:54 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:20.904 08:06:54 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.904 08:06:54 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:06:20.904 08:06:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:20.904 08:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.904 ************************************ 00:06:20.904 START TEST accel_comp 00:06:20.904 ************************************ 00:06:20.904 08:06:54 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:06:20.904 08:06:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.904 08:06:54 -- accel/accel.sh@17 -- # local accel_module 00:06:20.904 08:06:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.904 08:06:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.904 08:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.904 08:06:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.904 08:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.904 08:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.904 08:06:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.904 08:06:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.904 08:06:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.904 08:06:54 -- accel/accel.sh@42 -- # jq -r . 00:06:20.904 [2024-02-13 08:06:54.410628] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
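The `accel_perf -c /dev/fd/62` invocations traced above show the harness handing the JSON accel config to the binary over an inherited file descriptor instead of a temp file. A minimal bash sketch of that technique, with a placeholder JSON payload (the real config is built by `build_accel_config`, which is not reproduced here):

```shell
# Open fd 62 on a process substitution holding the config, then read it
# back via the /dev/fd/62 path, the same path accel_perf is given above.
exec 62< <(printf '%s\n' '{"accel":{"modules":[]}}')
cfg=$(cat /dev/fd/62)
exec 62<&-    # close the descriptor again
echo "$cfg"
```

A child process started while fd 62 is open sees the same `/dev/fd/62` path, which is how the config reaches accel_perf without touching the filesystem.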
00:06:20.904 [2024-02-13 08:06:54.410692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091438 ] 00:06:20.904 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.904 [2024-02-13 08:06:54.469182] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.904 [2024-02-13 08:06:54.536888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.904 [2024-02-13 08:06:54.536942] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:22.280 [2024-02-13 08:06:55.583563] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:22.280 08:06:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:22.280 00:06:22.280 SPDK Configuration: 00:06:22.280 Core mask: 0x1 00:06:22.280 00:06:22.280 Accel Perf Configuration: 00:06:22.280 Workload Type: compress 00:06:22.280 Transfer size: 4096 bytes 00:06:22.280 Vector count 1 00:06:22.280 Module: software 00:06:22.280 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.280 Queue depth: 32 00:06:22.280 Allocate depth: 32 00:06:22.280 # threads/core: 1 00:06:22.280 Run time: 1 seconds 00:06:22.280 Verify: No 00:06:22.280 00:06:22.280 Running for 1 seconds... 
00:06:22.280 00:06:22.280 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.280 ------------------------------------------------------------------------------------ 00:06:22.280 0,0 64832/s 253 MiB/s 0 0 00:06:22.280 ==================================================================================== 00:06:22.280 Total 64832/s 253 MiB/s 0 0' 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.280 08:06:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.280 08:06:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.280 08:06:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.280 08:06:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.280 08:06:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.280 08:06:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.280 08:06:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.280 08:06:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.280 08:06:55 -- accel/accel.sh@42 -- # jq -r . 00:06:22.280 [2024-02-13 08:06:55.759816] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:22.280 [2024-02-13 08:06:55.759891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091669 ] 00:06:22.280 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.280 [2024-02-13 08:06:55.821868] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.280 [2024-02-13 08:06:55.889718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.280 [2024-02-13 08:06:55.889769] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=0x1 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- 
accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=compress 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=software 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=32 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=32 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- 
accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=1 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val=No 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:22.280 08:06:55 -- accel/accel.sh@21 -- # val= 00:06:22.280 08:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:22.280 08:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:23.655 [2024-02-13 08:06:56.936269] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:23.655 08:06:57 -- accel/accel.sh@21 -- # val= 00:06:23.655 08:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.655 08:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:23.655 08:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:23.655 08:06:57 -- accel/accel.sh@21 -- # val= 00:06:23.655 08:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.655 08:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:23.655 08:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:23.655 08:06:57 
-- accel/accel.sh@21 -- # val= 00:06:23.655 08:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.655 08:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:23.655 08:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:23.655 08:06:57 -- accel/accel.sh@21 -- # val= 00:06:23.656 08:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.656 08:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:23.656 08:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:23.656 08:06:57 -- accel/accel.sh@21 -- # val= 00:06:23.656 08:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.656 08:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:23.656 08:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:23.656 08:06:57 -- accel/accel.sh@21 -- # val= 00:06:23.656 08:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.656 08:06:57 -- accel/accel.sh@20 -- # IFS=: 00:06:23.656 08:06:57 -- accel/accel.sh@20 -- # read -r var val 00:06:23.656 08:06:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.656 08:06:57 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:23.656 08:06:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.656 00:06:23.656 real 0m2.707s 00:06:23.656 user 0m2.485s 00:06:23.656 sys 0m0.230s 00:06:23.656 08:06:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.656 08:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.656 ************************************ 00:06:23.656 END TEST accel_comp 00:06:23.656 ************************************ 00:06:23.656 08:06:57 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.656 08:06:57 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:06:23.656 08:06:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:23.656 08:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.656 ************************************ 00:06:23.656 START TEST accel_decomp 00:06:23.656 
************************************ 00:06:23.656 08:06:57 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.656 08:06:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.656 08:06:57 -- accel/accel.sh@17 -- # local accel_module 00:06:23.656 08:06:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.656 08:06:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.656 08:06:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.656 08:06:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.656 08:06:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.656 08:06:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.656 08:06:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.656 08:06:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.656 08:06:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.656 08:06:57 -- accel/accel.sh@42 -- # jq -r . 00:06:23.656 [2024-02-13 08:06:57.157128] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
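The repeated `IFS=:` / `read -r var val` / `case "$var" in` xtrace lines that fill this section come from the harness splitting each `name: value` line of accel_perf's output on `:` and dispatching on the name. A minimal sketch of a loop with that shape (the key names below are invented for illustration, not the real accel_perf output):

```shell
# Split "name: value" lines on ":" with IFS and dispatch on the name,
# the pattern whose xtrace appears throughout this log.
parse_cfg() {
    while IFS=: read -r var val; do
        case "$var" in
            "Queue depth") depth=${val# } ;;    # strip the leading space
            "Run time")    runtime=${val# } ;;
            *) : ;;                             # ignore unrecognized keys
        esac
    done
}

parse_cfg <<'EOF'
Queue depth: 32
Run time: 1 seconds
EOF
echo "$depth $runtime"
```

Because `parse_cfg` is fed by a redirection rather than a pipe, the loop runs in the current shell and the captured variables remain visible afterwards.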
00:06:23.656 [2024-02-13 08:06:57.157207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091916 ] 00:06:23.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.656 [2024-02-13 08:06:57.216613] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.656 [2024-02-13 08:06:57.284091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.656 [2024-02-13 08:06:57.284145] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:25.031 [2024-02-13 08:06:58.330566] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:25.031 08:06:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:25.031 00:06:25.031 SPDK Configuration: 00:06:25.031 Core mask: 0x1 00:06:25.031 00:06:25.031 Accel Perf Configuration: 00:06:25.031 Workload Type: decompress 00:06:25.031 Transfer size: 4096 bytes 00:06:25.031 Vector count 1 00:06:25.031 Module: software 00:06:25.031 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.031 Queue depth: 32 00:06:25.031 Allocate depth: 32 00:06:25.031 # threads/core: 1 00:06:25.031 Run time: 1 seconds 00:06:25.031 Verify: Yes 00:06:25.031 00:06:25.031 Running for 1 seconds... 
00:06:25.031 00:06:25.031 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.031 ------------------------------------------------------------------------------------ 00:06:25.031 0,0 75456/s 294 MiB/s 0 0 00:06:25.031 ==================================================================================== 00:06:25.031 Total 75456/s 294 MiB/s 0 0' 00:06:25.031 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.031 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.031 08:06:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:25.031 08:06:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:25.031 08:06:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.031 08:06:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.031 08:06:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.031 08:06:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.031 08:06:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.031 08:06:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.031 08:06:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.032 08:06:58 -- accel/accel.sh@42 -- # jq -r . 00:06:25.032 [2024-02-13 08:06:58.508857] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:25.032 [2024-02-13 08:06:58.508938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092151 ] 00:06:25.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.032 [2024-02-13 08:06:58.569535] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.032 [2024-02-13 08:06:58.635498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.032 [2024-02-13 08:06:58.635551] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=0x1 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- 
accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=decompress 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=software 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=32 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=32 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- 
accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=1 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val=Yes 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:25.032 08:06:58 -- accel/accel.sh@21 -- # val= 00:06:25.032 08:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # IFS=: 00:06:25.032 08:06:58 -- accel/accel.sh@20 -- # read -r var val 00:06:26.407 [2024-02-13 08:06:59.681791] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:26.407 08:06:59 -- accel/accel.sh@21 -- # val= 00:06:26.407 08:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:26.407 08:06:59 -- accel/accel.sh@21 -- # val= 00:06:26.407 08:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:26.407 
08:06:59 -- accel/accel.sh@21 -- # val= 00:06:26.407 08:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:26.407 08:06:59 -- accel/accel.sh@21 -- # val= 00:06:26.407 08:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:26.407 08:06:59 -- accel/accel.sh@21 -- # val= 00:06:26.407 08:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:26.407 08:06:59 -- accel/accel.sh@21 -- # val= 00:06:26.407 08:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # IFS=: 00:06:26.407 08:06:59 -- accel/accel.sh@20 -- # read -r var val 00:06:26.407 08:06:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.407 08:06:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:26.407 08:06:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.407 00:06:26.407 real 0m2.708s 00:06:26.407 user 0m2.482s 00:06:26.407 sys 0m0.234s 00:06:26.407 08:06:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.407 08:06:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.407 ************************************ 00:06:26.407 END TEST accel_decomp 00:06:26.407 ************************************ 00:06:26.407 08:06:59 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.407 08:06:59 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:26.407 08:06:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:26.407 08:06:59 -- common/autotest_common.sh@10 -- # set +x 00:06:26.407 ************************************ 00:06:26.407 START TEST 
accel_decmop_full 00:06:26.407 ************************************ 00:06:26.407 08:06:59 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.407 08:06:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.407 08:06:59 -- accel/accel.sh@17 -- # local accel_module 00:06:26.407 08:06:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.407 08:06:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:26.407 08:06:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.407 08:06:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.407 08:06:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.407 08:06:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.407 08:06:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.407 08:06:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.407 08:06:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.407 08:06:59 -- accel/accel.sh@42 -- # jq -r . 00:06:26.407 [2024-02-13 08:06:59.901995] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:26.408 [2024-02-13 08:06:59.902067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092397 ]
00:06:26.408 EAL: No free 2048 kB hugepages reported on node 1
00:06:26.408 [2024-02-13 08:06:59.964506] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.408 [2024-02-13 08:07:00.035958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.408 [2024-02-13 08:07:00.036009] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:06:27.784 [2024-02-13 08:07:01.091770] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:06:27.784 08:07:01 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:27.784
00:06:27.784 SPDK Configuration:
00:06:27.784 Core mask: 0x1
00:06:27.784
00:06:27.784 Accel Perf Configuration:
00:06:27.784 Workload Type: decompress
00:06:27.784 Transfer size: 111250 bytes
00:06:27.784 Vector count 1
00:06:27.784 Module: software
00:06:27.784 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:27.784 Queue depth: 32
00:06:27.784 Allocate depth: 32
00:06:27.784 # threads/core: 1
00:06:27.784 Run time: 1 seconds
00:06:27.784 Verify: Yes
00:06:27.784
00:06:27.784 Running for 1 seconds...
00:06:27.784
00:06:27.784 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:27.784 ------------------------------------------------------------------------------------
00:06:27.784 0,0 4896/s 202 MiB/s 0 0
00:06:27.784 ====================================================================================
00:06:27.784 Total 4896/s 519 MiB/s 0 0'
00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=:
00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val
00:06:27.784 08:07:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:27.784 08:07:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:27.784 08:07:01 -- accel/accel.sh@12 -- # build_accel_config
00:06:27.784 08:07:01 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:27.784 08:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:27.784 08:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:27.784 08:07:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:27.784 08:07:01 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:27.784 08:07:01 -- accel/accel.sh@41 -- # local IFS=,
00:06:27.784 08:07:01 -- accel/accel.sh@42 -- # jq -r .
00:06:27.784 [2024-02-13 08:07:01.269301] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:06:27.784 [2024-02-13 08:07:01.269379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092629 ] 00:06:27.784 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.784 [2024-02-13 08:07:01.330080] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.784 [2024-02-13 08:07:01.396528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.784 [2024-02-13 08:07:01.396582] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=0x1 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- 
accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=decompress 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=software 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=32 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=32 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- 
accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=1 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val=Yes 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:27.784 08:07:01 -- accel/accel.sh@21 -- # val= 00:06:27.784 08:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:27.784 08:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:29.160 [2024-02-13 08:07:02.452675] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:29.160 08:07:02 -- accel/accel.sh@21 -- # val= 00:06:29.160 08:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.160 08:07:02 -- accel/accel.sh@21 -- # val= 00:06:29.160 08:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.160 
08:07:02 -- accel/accel.sh@21 -- # val= 00:06:29.160 08:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.160 08:07:02 -- accel/accel.sh@21 -- # val= 00:06:29.160 08:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.160 08:07:02 -- accel/accel.sh@21 -- # val= 00:06:29.160 08:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.160 08:07:02 -- accel/accel.sh@21 -- # val= 00:06:29.160 08:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:29.160 08:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:29.160 08:07:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.160 08:07:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:29.160 08:07:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.160 00:06:29.160 real 0m2.733s 00:06:29.160 user 0m2.503s 00:06:29.160 sys 0m0.237s 00:06:29.160 08:07:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.160 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.160 ************************************ 00:06:29.160 END TEST accel_decmop_full 00:06:29.160 ************************************ 00:06:29.160 08:07:02 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.160 08:07:02 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:29.160 08:07:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:29.160 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.160 ************************************ 00:06:29.160 START TEST 
accel_decomp_mcore 00:06:29.160 ************************************ 00:06:29.160 08:07:02 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.160 08:07:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.160 08:07:02 -- accel/accel.sh@17 -- # local accel_module 00:06:29.160 08:07:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.160 08:07:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:29.160 08:07:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.160 08:07:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.160 08:07:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.160 08:07:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.160 08:07:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.160 08:07:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.160 08:07:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.160 08:07:02 -- accel/accel.sh@42 -- # jq -r . 00:06:29.160 [2024-02-13 08:07:02.669838] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:29.160 [2024-02-13 08:07:02.669901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092881 ]
00:06:29.160 EAL: No free 2048 kB hugepages reported on node 1
00:06:29.160 [2024-02-13 08:07:02.730537] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:29.160 [2024-02-13 08:07:02.800735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:29.160 [2024-02-13 08:07:02.800833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:29.160 [2024-02-13 08:07:02.800924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:29.160 [2024-02-13 08:07:02.800926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.160 [2024-02-13 08:07:02.801010] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09
00:06:30.539 [2024-02-13 08:07:03.852353] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:06:30.539 08:07:04 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:30.539
00:06:30.539 SPDK Configuration:
00:06:30.539 Core mask: 0xf
00:06:30.539
00:06:30.539 Accel Perf Configuration:
00:06:30.539 Workload Type: decompress
00:06:30.539 Transfer size: 4096 bytes
00:06:30.539 Vector count 1
00:06:30.539 Module: software
00:06:30.539 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:30.539 Queue depth: 32
00:06:30.539 Allocate depth: 32
00:06:30.539 # threads/core: 1
00:06:30.539 Run time: 1 seconds
00:06:30.539 Verify: Yes
00:06:30.539
00:06:30.539 Running for 1 seconds...
00:06:30.539
00:06:30.539 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:30.539 ------------------------------------------------------------------------------------
00:06:30.539 0,0 60992/s 112 MiB/s 0 0
00:06:30.539 3,0 63296/s 116 MiB/s 0 0
00:06:30.539 2,0 63296/s 116 MiB/s 0 0
00:06:30.539 1,0 63072/s 116 MiB/s 0 0
00:06:30.539 ====================================================================================
00:06:30.539 Total 250656/s 979 MiB/s 0 0'
00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=:
00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val
00:06:30.539 08:07:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:30.539 08:07:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:30.539 08:07:04 -- accel/accel.sh@12 -- # build_accel_config
00:06:30.539 08:07:04 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:30.539 08:07:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:30.539 08:07:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:30.539 08:07:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:30.539 08:07:04 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:30.539 08:07:04 -- accel/accel.sh@41 -- # local IFS=,
00:06:30.539 08:07:04 -- accel/accel.sh@42 -- # jq -r .
00:06:30.539 [2024-02-13 08:07:04.031173] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:06:30.539 [2024-02-13 08:07:04.031253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093113 ] 00:06:30.539 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.539 [2024-02-13 08:07:04.090150] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.539 [2024-02-13 08:07:04.159621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.539 [2024-02-13 08:07:04.159720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.539 [2024-02-13 08:07:04.159743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.539 [2024-02-13 08:07:04.159745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.539 [2024-02-13 08:07:04.159873] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val=0xf 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 
08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val=decompress 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val=software 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.539 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.539 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.539 08:07:04 -- accel/accel.sh@21 -- # val=32 00:06:30.539 08:07:04 
-- accel/accel.sh@22 -- # case "$var" in 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.540 08:07:04 -- accel/accel.sh@21 -- # val=32 00:06:30.540 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.540 08:07:04 -- accel/accel.sh@21 -- # val=1 00:06:30.540 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.540 08:07:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.540 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.540 08:07:04 -- accel/accel.sh@21 -- # val=Yes 00:06:30.540 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.540 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.540 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:30.540 08:07:04 -- accel/accel.sh@21 -- # val= 00:06:30.540 08:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # IFS=: 00:06:30.540 08:07:04 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 [2024-02-13 08:07:05.210264] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 
08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@21 -- # val= 00:06:31.918 08:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:31.918 08:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:31.918 08:07:05 -- accel/accel.sh@28 -- # [[ -n software ]] 
00:06:31.918 08:07:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:31.918 08:07:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.918 00:06:31.918 real 0m2.727s 00:06:31.918 user 0m9.140s 00:06:31.918 sys 0m0.251s 00:06:31.918 08:07:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.918 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.918 ************************************ 00:06:31.918 END TEST accel_decomp_mcore 00:06:31.918 ************************************ 00:06:31.918 08:07:05 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.918 08:07:05 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:31.918 08:07:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:31.918 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.918 ************************************ 00:06:31.918 START TEST accel_decomp_full_mcore 00:06:31.918 ************************************ 00:06:31.918 08:07:05 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.918 08:07:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.918 08:07:05 -- accel/accel.sh@17 -- # local accel_module 00:06:31.918 08:07:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.918 08:07:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.918 08:07:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.918 08:07:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.918 08:07:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:31.918 08:07:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.918 08:07:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.919 08:07:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.919 08:07:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.919 08:07:05 -- accel/accel.sh@42 -- # jq -r . 00:06:31.919 [2024-02-13 08:07:05.437194] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:31.919 [2024-02-13 08:07:05.437274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093366 ] 00:06:31.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.919 [2024-02-13 08:07:05.497641] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.919 [2024-02-13 08:07:05.567566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.919 [2024-02-13 08:07:05.567682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.919 [2024-02-13 08:07:05.567724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.919 [2024-02-13 08:07:05.567723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.919 [2024-02-13 08:07:05.567816] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:33.299 [2024-02-13 08:07:06.627530] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:33.299 08:07:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:33.299 00:06:33.299 SPDK Configuration: 00:06:33.299 Core mask: 0xf 00:06:33.299 00:06:33.299 Accel Perf Configuration: 00:06:33.299 Workload Type: decompress 00:06:33.299 Transfer size: 111250 bytes 00:06:33.299 Vector count 1 00:06:33.299 Module: software 00:06:33.299 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.299 Queue depth: 32 00:06:33.299 Allocate depth: 32 00:06:33.299 # threads/core: 1 00:06:33.299 Run time: 1 seconds 00:06:33.299 Verify: Yes 00:06:33.299 00:06:33.299 Running for 1 seconds... 00:06:33.299 00:06:33.299 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.299 ------------------------------------------------------------------------------------ 00:06:33.299 0,0 4640/s 191 MiB/s 0 0 00:06:33.299 3,0 4800/s 198 MiB/s 0 0 00:06:33.299 2,0 4800/s 198 MiB/s 0 0 00:06:33.299 1,0 4800/s 198 MiB/s 0 0 00:06:33.299 ==================================================================================== 00:06:33.299 Total 19040/s 2020 MiB/s 0 0' 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.299 08:07:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.299 08:07:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.299 08:07:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.299 08:07:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.299 08:07:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.299 08:07:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.299 08:07:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.299 08:07:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.299 
08:07:06 -- accel/accel.sh@42 -- # jq -r . 00:06:33.299 [2024-02-13 08:07:06.805230] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:33.299 [2024-02-13 08:07:06.805293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093607 ] 00:06:33.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.299 [2024-02-13 08:07:06.865614] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.299 [2024-02-13 08:07:06.933801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.299 [2024-02-13 08:07:06.933901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.299 [2024-02-13 08:07:06.933990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.299 [2024-02-13 08:07:06.933992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.299 [2024-02-13 08:07:06.934076] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:33.299 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- 
accel/accel.sh@21 -- # val=0xf 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- accel/accel.sh@21 -- # val=decompress 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.299 08:07:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.299 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.299 08:07:06 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:33.299 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.300 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.300 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.300 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.300 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.300 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.300 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.300 08:07:06 -- accel/accel.sh@21 -- # val=software 00:06:33.300 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.300 08:07:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.559 08:07:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.559 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.559 08:07:06 -- 
accel/accel.sh@20 -- # IFS=: 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.559 08:07:06 -- accel/accel.sh@21 -- # val=32 00:06:33.559 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.559 08:07:06 -- accel/accel.sh@21 -- # val=32 00:06:33.559 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.559 08:07:06 -- accel/accel.sh@21 -- # val=1 00:06:33.559 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.559 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.560 08:07:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.560 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.560 08:07:06 -- accel/accel.sh@21 -- # val=Yes 00:06:33.560 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.560 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.560 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:33.560 08:07:06 -- accel/accel.sh@21 -- # val= 00:06:33.560 08:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # IFS=: 00:06:33.560 08:07:06 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 [2024-02-13 08:07:07.993663] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:34.499 
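The `-m 0xf` option in the run above is a hexadecimal core mask: bit N selects core N, which is why the EAL line reports `-c 0xf` and the log then shows reactors starting on cores 0 through 3. A small illustrative helper (not part of SPDK; the function name is invented here) that expands such a mask into the core list:

```shell
# mask_to_cores: print the core numbers selected by a hex core mask.
# Illustrative only -- SPDK/DPDK do this internally in C when parsing -m/-c.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while [ "$mask" -ne 0 ]; do
        # low bit set => this core is in the mask
        if [ $(( mask & 1 )) -ne 0 ]; then
            out="$out $core"
        fi
        mask=$(( mask >> 1 ))   # shift to test the next core's bit
        core=$(( core + 1 ))
    done
    echo "${out# }"
}

mask_to_cores 0xf   # -> 0 1 2 3
```

With `0xf` all four low bits are set, matching the four "Reactor started on core" lines; the single-core runs later in this log use `0x1`, which selects core 0 only.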
08:07:08 -- accel/accel.sh@21 -- # val= 00:06:34.499 08:07:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 08:07:08 -- accel/accel.sh@20 -- # IFS=: 00:06:34.499 08:07:08 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 08:07:08 -- accel/accel.sh@21 -- # val= 00:06:34.499 08:07:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.499 08:07:08 -- 
accel/accel.sh@20 -- # IFS=: 00:06:34.499 08:07:08 -- accel/accel.sh@20 -- # read -r var val 00:06:34.499 08:07:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.499 08:07:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:34.499 08:07:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.499 00:06:34.499 real 0m2.743s 00:06:34.499 user 0m9.201s 00:06:34.499 sys 0m0.255s 00:06:34.499 08:07:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.499 08:07:08 -- common/autotest_common.sh@10 -- # set +x 00:06:34.499 ************************************ 00:06:34.499 END TEST accel_decomp_full_mcore 00:06:34.499 ************************************ 00:06:34.499 08:07:08 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.791 08:07:08 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:34.791 08:07:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:34.791 08:07:08 -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 ************************************ 00:06:34.791 START TEST accel_decomp_mthread 00:06:34.791 ************************************ 00:06:34.791 08:07:08 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.791 08:07:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.791 08:07:08 -- accel/accel.sh@17 -- # local accel_module 00:06:34.791 08:07:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.791 08:07:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:34.791 08:07:08 -- accel/accel.sh@12 -- # build_accel_config 
00:06:34.791 08:07:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.791 08:07:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.791 08:07:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.791 08:07:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.791 08:07:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.791 08:07:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.791 08:07:08 -- accel/accel.sh@42 -- # jq -r . 00:06:34.791 [2024-02-13 08:07:08.217197] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:34.791 [2024-02-13 08:07:08.217254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093854 ] 00:06:34.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.791 [2024-02-13 08:07:08.275958] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.791 [2024-02-13 08:07:08.344166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.791 [2024-02-13 08:07:08.344222] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:35.729 [2024-02-13 08:07:09.394026] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:35.989 08:07:09 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:35.989 00:06:35.989 SPDK Configuration: 00:06:35.989 Core mask: 0x1 00:06:35.989 00:06:35.989 Accel Perf Configuration: 00:06:35.989 Workload Type: decompress 00:06:35.989 Transfer size: 4096 bytes 00:06:35.989 Vector count 1 00:06:35.989 Module: software 00:06:35.989 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.989 Queue depth: 32 00:06:35.989 Allocate depth: 32 00:06:35.989 # threads/core: 2 00:06:35.989 Run time: 1 seconds 00:06:35.989 Verify: Yes 00:06:35.989 00:06:35.989 Running for 1 seconds... 00:06:35.989 00:06:35.989 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.989 ------------------------------------------------------------------------------------ 00:06:35.989 0,1 37920/s 69 MiB/s 0 0 00:06:35.989 0,0 37792/s 69 MiB/s 0 0 00:06:35.989 ==================================================================================== 00:06:35.989 Total 75712/s 295 MiB/s 0 0' 00:06:35.989 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:35.989 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:35.989 08:07:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.989 08:07:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.989 08:07:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.989 08:07:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.989 08:07:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.989 08:07:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.989 08:07:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.989 08:07:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.989 08:07:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.989 08:07:09 -- accel/accel.sh@42 -- # jq -r . 
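The `IFS=:`, `read -r var val`, and `case "$var"` lines that dominate this trace come from the harness splitting accel_perf's `key: value` report one line at a time and dispatching on the key. A minimal sketch of that pattern (the function name and the keys handled are illustrative assumptions, not the actual accel.sh code):

```shell
# parse_accel_output: toy version of the IFS=: read/case loop traced above.
# Reads "key: value" lines from stdin and captures two fields of interest.
parse_accel_output() {
    local module='' opc=''
    while IFS=: read -r var val; do      # split each line at the first ':'
        case "$var" in
            "Module")        module="${val# }" ;;  # strip the leading space
            "Workload Type") opc="${val# }" ;;
        esac
    done
    echo "module=$module opc=$opc"
}

printf 'Workload Type: decompress\nModule: software\n' | parse_accel_output
# -> module=software opc=decompress
```

This is why the trace ends each test with checks like `[[ -n software ]]` and `[[ -n decompress ]]`: the harness asserts that both fields were actually found in the report before declaring the test passed.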
00:06:35.989 [2024-02-13 08:07:09.570087] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:06:35.989 [2024-02-13 08:07:09.570166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094094 ] 00:06:35.989 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.989 [2024-02-13 08:07:09.628952] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.248 [2024-02-13 08:07:09.696799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.248 [2024-02-13 08:07:09.696844] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:36.248 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.248 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.248 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.248 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.248 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.248 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.248 08:07:09 -- accel/accel.sh@21 -- # val=0x1 00:06:36.248 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.248 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.248 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- 
accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val=decompress 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val=software 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val=32 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- 
accel/accel.sh@21 -- # val=32 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val=2 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val=Yes 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:36.249 08:07:09 -- accel/accel.sh@21 -- # val= 00:06:36.249 08:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:36.249 08:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:37.187 [2024-02-13 08:07:10.746587] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:37.446 08:07:10 -- accel/accel.sh@21 -- # val= 00:06:37.446 08:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.446 08:07:10 -- accel/accel.sh@21 -- # val= 00:06:37.446 08:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.446 
08:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.446 08:07:10 -- accel/accel.sh@21 -- # val= 00:06:37.446 08:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.446 08:07:10 -- accel/accel.sh@21 -- # val= 00:06:37.446 08:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.446 08:07:10 -- accel/accel.sh@21 -- # val= 00:06:37.446 08:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.446 08:07:10 -- accel/accel.sh@21 -- # val= 00:06:37.446 08:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.446 08:07:10 -- accel/accel.sh@21 -- # val= 00:06:37.446 08:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # IFS=: 00:06:37.446 08:07:10 -- accel/accel.sh@20 -- # read -r var val 00:06:37.446 08:07:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.446 08:07:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.446 08:07:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.446 00:06:37.446 real 0m2.713s 00:06:37.446 user 0m2.490s 00:06:37.446 sys 0m0.231s 00:06:37.446 08:07:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.446 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:37.446 ************************************ 00:06:37.446 END TEST accel_decomp_mthread 00:06:37.446 ************************************ 00:06:37.446 08:07:10 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.446 08:07:10 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:37.446 08:07:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:37.446 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:37.446 ************************************ 00:06:37.446 START TEST accel_deomp_full_mthread 00:06:37.446 ************************************ 00:06:37.446 08:07:10 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.446 08:07:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.446 08:07:10 -- accel/accel.sh@17 -- # local accel_module 00:06:37.446 08:07:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.446 08:07:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.446 08:07:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.446 08:07:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.446 08:07:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.446 08:07:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.446 08:07:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.446 08:07:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.446 08:07:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.446 08:07:10 -- accel/accel.sh@42 -- # jq -r . 00:06:37.446 [2024-02-13 08:07:10.966050] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:37.446 [2024-02-13 08:07:10.966111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094341 ] 00:06:37.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.446 [2024-02-13 08:07:11.024963] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.446 [2024-02-13 08:07:11.092222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.446 [2024-02-13 08:07:11.092277] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:38.825 [2024-02-13 08:07:12.165447] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:38.825 08:07:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:38.825 00:06:38.825 SPDK Configuration: 00:06:38.825 Core mask: 0x1 00:06:38.825 00:06:38.825 Accel Perf Configuration: 00:06:38.825 Workload Type: decompress 00:06:38.825 Transfer size: 111250 bytes 00:06:38.825 Vector count 1 00:06:38.825 Module: software 00:06:38.825 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.825 Queue depth: 32 00:06:38.825 Allocate depth: 32 00:06:38.825 # threads/core: 2 00:06:38.825 Run time: 1 seconds 00:06:38.825 Verify: Yes 00:06:38.825 00:06:38.825 Running for 1 seconds... 
00:06:38.825 00:06:38.825 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.825 ------------------------------------------------------------------------------------ 00:06:38.825 0,1 2528/s 104 MiB/s 0 0 00:06:38.825 0,0 2464/s 101 MiB/s 0 0 00:06:38.825 ==================================================================================== 00:06:38.825 Total 4992/s 529 MiB/s 0 0' 00:06:38.825 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:38.825 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:38.825 08:07:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.825 08:07:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.825 08:07:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.825 08:07:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.825 08:07:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.825 08:07:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.825 08:07:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.825 08:07:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.825 08:07:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.825 08:07:12 -- accel/accel.sh@42 -- # jq -r . 00:06:38.825 [2024-02-13 08:07:12.342485] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:38.825 [2024-02-13 08:07:12.342553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094572 ] 00:06:38.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.826 [2024-02-13 08:07:12.404541] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.826 [2024-02-13 08:07:12.470638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.826 [2024-02-13 08:07:12.470715] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=0x1 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- 
accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=decompress 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=software 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=32 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=32 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- 
accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=2 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val=Yes 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:39.085 08:07:12 -- accel/accel.sh@21 -- # val= 00:06:39.085 08:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # IFS=: 00:06:39.085 08:07:12 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 [2024-02-13 08:07:13.545304] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:40.024 08:07:13 -- accel/accel.sh@21 -- # val= 00:06:40.024 08:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 08:07:13 -- accel/accel.sh@21 -- # val= 00:06:40.024 08:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 
08:07:13 -- accel/accel.sh@21 -- # val= 00:06:40.024 08:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 08:07:13 -- accel/accel.sh@21 -- # val= 00:06:40.024 08:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 08:07:13 -- accel/accel.sh@21 -- # val= 00:06:40.024 08:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 08:07:13 -- accel/accel.sh@21 -- # val= 00:06:40.024 08:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 08:07:13 -- accel/accel.sh@21 -- # val= 00:06:40.024 08:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:40.024 08:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:40.024 08:07:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.024 08:07:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:40.024 08:07:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.024 00:06:40.024 real 0m2.763s 00:06:40.024 user 0m2.550s 00:06:40.024 sys 0m0.221s 00:06:40.024 08:07:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.024 08:07:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.024 ************************************ 00:06:40.024 END TEST accel_deomp_full_mthread 00:06:40.024 ************************************ 00:06:40.284 08:07:13 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:40.284 08:07:13 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.284 
08:07:13 -- accel/accel.sh@129 -- # build_accel_config 00:06:40.284 08:07:13 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:06:40.284 08:07:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:40.284 08:07:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.284 08:07:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.284 08:07:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.284 08:07:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.284 08:07:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.284 08:07:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.284 08:07:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.284 08:07:13 -- accel/accel.sh@42 -- # jq -r . 00:06:40.284 ************************************ 00:06:40.284 START TEST accel_dif_functional_tests 00:06:40.284 ************************************ 00:06:40.284 08:07:13 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.284 [2024-02-13 08:07:13.781677] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:40.284 [2024-02-13 08:07:13.781725] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094823 ] 00:06:40.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.284 [2024-02-13 08:07:13.841115] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.284 [2024-02-13 08:07:13.907993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.284 [2024-02-13 08:07:13.908093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.284 [2024-02-13 08:07:13.908096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.284 [2024-02-13 08:07:13.908175] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:40.544 00:06:40.544 00:06:40.544 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.544 http://cunit.sourceforge.net/ 00:06:40.544 00:06:40.544 00:06:40.544 Suite: accel_dif 00:06:40.544 Test: verify: DIF generated, GUARD check ...passed 00:06:40.544 Test: verify: DIF generated, APPTAG check ...passed 00:06:40.544 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.544 Test: verify: DIF not generated, GUARD check ...[2024-02-13 08:07:13.975879] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.544 [2024-02-13 08:07:13.975919] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.544 passed 00:06:40.544 Test: verify: DIF not generated, APPTAG check ...[2024-02-13 08:07:13.975951] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.544 [2024-02-13 08:07:13.975965] dif.c: 792:_dif_verify: *ERROR*: Failed to compare 
App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.544 passed 00:06:40.544 Test: verify: DIF not generated, REFTAG check ...[2024-02-13 08:07:13.975983] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.544 [2024-02-13 08:07:13.975996] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.544 passed 00:06:40.544 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.544 Test: verify: APPTAG incorrect, APPTAG check ...[2024-02-13 08:07:13.976036] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.544 passed 00:06:40.544 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:40.544 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.544 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:40.544 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-02-13 08:07:13.976132] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.544 passed 00:06:40.544 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.544 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:40.544 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.544 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:40.544 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.544 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:40.544 Test: generate copy: iovecs-len validate ...[2024-02-13 08:07:13.976283] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:40.544 passed 00:06:40.544 Test: generate copy: buffer alignment validate ...passed 00:06:40.544 00:06:40.544 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.544 suites 1 1 n/a 0 0 00:06:40.544 tests 20 20 20 0 0 00:06:40.544 asserts 204 204 204 0 n/a 00:06:40.544 00:06:40.544 Elapsed time = 0.000 seconds 00:06:40.544 [2024-02-13 08:07:13.976444] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:40.544 00:06:40.544 real 0m0.427s 00:06:40.544 user 0m0.638s 00:06:40.544 sys 0m0.143s 00:06:40.544 08:07:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.544 08:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 ************************************ 00:06:40.544 END TEST accel_dif_functional_tests 00:06:40.544 ************************************ 00:06:40.544 00:06:40.544 real 0m57.826s 00:06:40.544 user 1m6.327s 00:06:40.544 sys 0m6.162s 00:06:40.544 08:07:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.544 08:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 ************************************ 00:06:40.544 END TEST accel 00:06:40.544 ************************************ 00:06:40.804 08:07:14 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.804 08:07:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:40.804 08:07:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:40.804 08:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.804 ************************************ 00:06:40.804 START TEST accel_rpc 00:06:40.804 ************************************ 00:06:40.804 08:07:14 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.804 * Looking for test storage... 
00:06:40.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.804 08:07:14 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.804 08:07:14 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2095100 00:06:40.804 08:07:14 -- accel/accel_rpc.sh@15 -- # waitforlisten 2095100 00:06:40.804 08:07:14 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.804 08:07:14 -- common/autotest_common.sh@817 -- # '[' -z 2095100 ']' 00:06:40.804 08:07:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.804 08:07:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.804 08:07:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.804 08:07:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.804 08:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:40.804 [2024-02-13 08:07:14.371282] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:40.804 [2024-02-13 08:07:14.371329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095100 ] 00:06:40.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.804 [2024-02-13 08:07:14.427831] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.064 [2024-02-13 08:07:14.497617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.064 [2024-02-13 08:07:14.497736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.634 08:07:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.634 08:07:15 -- common/autotest_common.sh@850 -- # return 0 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:41.634 08:07:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:41.634 08:07:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:41.634 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.634 ************************************ 00:06:41.634 START TEST accel_assign_opcode 00:06:41.634 ************************************ 00:06:41.634 08:07:15 -- common/autotest_common.sh@1102 -- # accel_assign_opcode_test_suite 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:41.634 08:07:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.634 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.634 [2024-02-13 08:07:15.167706] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:06:41.634 08:07:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:41.634 08:07:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.634 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.634 [2024-02-13 08:07:15.175697] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:41.634 08:07:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.634 08:07:15 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:41.634 08:07:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.634 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.894 08:07:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.894 08:07:15 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.894 08:07:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.894 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.894 08:07:15 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.894 08:07:15 -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.894 08:07:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.894 software 00:06:41.894 00:06:41.894 real 0m0.229s 00:06:41.894 user 0m0.038s 00:06:41.894 sys 0m0.009s 00:06:41.894 08:07:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.894 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:41.894 ************************************ 00:06:41.894 END TEST accel_assign_opcode 00:06:41.894 ************************************ 00:06:41.894 08:07:15 -- accel/accel_rpc.sh@55 -- # killprocess 2095100 00:06:41.894 08:07:15 -- common/autotest_common.sh@924 -- # '[' -z 2095100 ']' 00:06:41.894 08:07:15 -- common/autotest_common.sh@928 -- # kill -0 2095100 00:06:41.894 08:07:15 -- common/autotest_common.sh@929 -- # uname 00:06:41.894 
08:07:15 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:41.894 08:07:15 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2095100 00:06:41.894 08:07:15 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:41.894 08:07:15 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:41.894 08:07:15 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2095100' 00:06:41.894 killing process with pid 2095100 00:06:41.894 08:07:15 -- common/autotest_common.sh@943 -- # kill 2095100 00:06:41.894 08:07:15 -- common/autotest_common.sh@948 -- # wait 2095100 00:06:42.154 00:06:42.154 real 0m1.547s 00:06:42.154 user 0m1.616s 00:06:42.154 sys 0m0.367s 00:06:42.154 08:07:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.154 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:42.154 ************************************ 00:06:42.154 END TEST accel_rpc 00:06:42.154 ************************************ 00:06:42.154 08:07:15 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.154 08:07:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:42.154 08:07:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:42.154 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:42.154 ************************************ 00:06:42.154 START TEST app_cmdline 00:06:42.154 ************************************ 00:06:42.154 08:07:15 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.413 * Looking for test storage... 
00:06:42.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.413 08:07:15 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.413 08:07:15 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2095405 00:06:42.413 08:07:15 -- app/cmdline.sh@18 -- # waitforlisten 2095405 00:06:42.413 08:07:15 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.413 08:07:15 -- common/autotest_common.sh@817 -- # '[' -z 2095405 ']' 00:06:42.413 08:07:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.413 08:07:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.413 08:07:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.413 08:07:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.413 08:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:42.413 [2024-02-13 08:07:15.958777] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:06:42.413 [2024-02-13 08:07:15.958827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095405 ] 00:06:42.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.413 [2024-02-13 08:07:16.018514] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.413 [2024-02-13 08:07:16.087342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.413 [2024-02-13 08:07:16.087460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.348 08:07:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.348 08:07:16 -- common/autotest_common.sh@850 -- # return 0 00:06:43.348 08:07:16 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:43.348 { 00:06:43.348 "version": "SPDK v24.05-pre git sha1 3bec6cb23", 00:06:43.348 "fields": { 00:06:43.348 "major": 24, 00:06:43.348 "minor": 5, 00:06:43.348 "patch": 0, 00:06:43.348 "suffix": "-pre", 00:06:43.348 "commit": "3bec6cb23" 00:06:43.348 } 00:06:43.348 } 00:06:43.348 08:07:16 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.348 08:07:16 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.348 08:07:16 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.348 08:07:16 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.348 08:07:16 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:43.348 08:07:16 -- app/cmdline.sh@26 -- # sort 00:06:43.348 08:07:16 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.348 08:07:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:43.348 08:07:16 -- common/autotest_common.sh@10 -- # set +x 00:06:43.348 08:07:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:43.348 
08:07:16 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.348 08:07:16 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.348 08:07:16 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.348 08:07:16 -- common/autotest_common.sh@638 -- # local es=0 00:06:43.348 08:07:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.348 08:07:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.348 08:07:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.348 08:07:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.348 08:07:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.348 08:07:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.348 08:07:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.348 08:07:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.348 08:07:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:43.348 08:07:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.607 request: 00:06:43.607 { 00:06:43.607 "method": "env_dpdk_get_mem_stats", 00:06:43.607 "req_id": 1 00:06:43.607 } 00:06:43.607 Got JSON-RPC error response 00:06:43.607 response: 00:06:43.607 { 00:06:43.607 "code": -32601, 00:06:43.607 "message": "Method not found" 00:06:43.607 } 00:06:43.607 08:07:17 -- common/autotest_common.sh@641 
-- # es=1 00:06:43.607 08:07:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:43.607 08:07:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:43.607 08:07:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:43.607 08:07:17 -- app/cmdline.sh@1 -- # killprocess 2095405 00:06:43.607 08:07:17 -- common/autotest_common.sh@924 -- # '[' -z 2095405 ']' 00:06:43.607 08:07:17 -- common/autotest_common.sh@928 -- # kill -0 2095405 00:06:43.607 08:07:17 -- common/autotest_common.sh@929 -- # uname 00:06:43.607 08:07:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:06:43.607 08:07:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2095405 00:06:43.607 08:07:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:06:43.607 08:07:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:06:43.607 08:07:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2095405' 00:06:43.607 killing process with pid 2095405 00:06:43.607 08:07:17 -- common/autotest_common.sh@943 -- # kill 2095405 00:06:43.607 08:07:17 -- common/autotest_common.sh@948 -- # wait 2095405 00:06:43.866 00:06:43.866 real 0m1.690s 00:06:43.866 user 0m2.032s 00:06:43.866 sys 0m0.408s 00:06:43.866 08:07:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.866 08:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:43.866 ************************************ 00:06:43.866 END TEST app_cmdline 00:06:43.866 ************************************ 00:06:44.126 08:07:17 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.126 08:07:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:44.126 08:07:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.126 08:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.126 ************************************ 00:06:44.126 START TEST version 00:06:44.126 
************************************ 00:06:44.126 08:07:17 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.126 * Looking for test storage... 00:06:44.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.126 08:07:17 -- app/version.sh@17 -- # get_header_version major 00:06:44.126 08:07:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.126 08:07:17 -- app/version.sh@14 -- # cut -f2 00:06:44.126 08:07:17 -- app/version.sh@14 -- # tr -d '"' 00:06:44.126 08:07:17 -- app/version.sh@17 -- # major=24 00:06:44.126 08:07:17 -- app/version.sh@18 -- # get_header_version minor 00:06:44.126 08:07:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.126 08:07:17 -- app/version.sh@14 -- # cut -f2 00:06:44.126 08:07:17 -- app/version.sh@14 -- # tr -d '"' 00:06:44.126 08:07:17 -- app/version.sh@18 -- # minor=5 00:06:44.126 08:07:17 -- app/version.sh@19 -- # get_header_version patch 00:06:44.126 08:07:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.126 08:07:17 -- app/version.sh@14 -- # cut -f2 00:06:44.126 08:07:17 -- app/version.sh@14 -- # tr -d '"' 00:06:44.126 08:07:17 -- app/version.sh@19 -- # patch=0 00:06:44.126 08:07:17 -- app/version.sh@20 -- # get_header_version suffix 00:06:44.126 08:07:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.126 08:07:17 -- app/version.sh@14 -- # cut -f2 00:06:44.126 08:07:17 -- app/version.sh@14 -- # tr -d '"' 00:06:44.126 08:07:17 -- app/version.sh@20 -- # suffix=-pre 00:06:44.126 08:07:17 -- 
app/version.sh@22 -- # version=24.5 00:06:44.126 08:07:17 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.126 08:07:17 -- app/version.sh@28 -- # version=24.5rc0 00:06:44.126 08:07:17 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:44.126 08:07:17 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.126 08:07:17 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:44.126 08:07:17 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:44.126 00:06:44.126 real 0m0.149s 00:06:44.126 user 0m0.087s 00:06:44.126 sys 0m0.098s 00:06:44.126 08:07:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.126 08:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.126 ************************************ 00:06:44.126 END TEST version 00:06:44.126 ************************************ 00:06:44.126 08:07:17 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@204 -- # uname -s 00:06:44.126 08:07:17 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:44.126 08:07:17 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:44.126 08:07:17 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:44.126 08:07:17 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:44.126 08:07:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:44.126 08:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.126 08:07:17 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@287 -- # '[' 
1 -eq 1 ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:44.126 08:07:17 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:44.126 08:07:17 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.126 08:07:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:44.127 08:07:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.127 08:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.127 ************************************ 00:06:44.127 START TEST nvmf_tcp 00:06:44.127 ************************************ 00:06:44.127 08:07:17 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.387 * Looking for test storage... 00:06:44.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.387 08:07:17 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.387 08:07:17 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:44.387 08:07:17 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.387 08:07:17 -- nvmf/common.sh@7 -- # uname -s 00:06:44.387 08:07:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.387 08:07:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.388 08:07:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.388 08:07:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.388 08:07:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.388 08:07:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.388 08:07:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.388 08:07:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.388 08:07:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.388 08:07:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.388 08:07:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:44.388 08:07:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:44.388 08:07:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.388 08:07:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.388 08:07:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.388 08:07:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.388 08:07:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.388 08:07:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.388 08:07:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.388 08:07:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- paths/export.sh@5 -- # export PATH 00:06:44.388 08:07:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- nvmf/common.sh@46 -- # : 0 00:06:44.388 08:07:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:44.388 08:07:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:44.388 
08:07:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:44.388 08:07:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.388 08:07:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.388 08:07:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:44.388 08:07:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:44.388 08:07:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:44.388 08:07:17 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.388 08:07:17 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:44.388 08:07:17 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:44.388 08:07:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.388 08:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.388 08:07:17 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:44.388 08:07:17 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.388 08:07:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:44.388 08:07:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:44.388 08:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:44.388 ************************************ 00:06:44.388 START TEST nvmf_example 00:06:44.388 ************************************ 00:06:44.388 08:07:17 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.388 * Looking for test storage... 
00:06:44.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.388 08:07:17 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.388 08:07:17 -- nvmf/common.sh@7 -- # uname -s 00:06:44.388 08:07:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.388 08:07:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.388 08:07:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.388 08:07:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.388 08:07:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.388 08:07:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.388 08:07:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.388 08:07:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.388 08:07:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.388 08:07:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.388 08:07:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:44.388 08:07:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:44.388 08:07:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.388 08:07:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.388 08:07:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.388 08:07:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.388 08:07:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.388 08:07:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.388 08:07:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.388 08:07:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- paths/export.sh@5 -- # export PATH 00:06:44.388 08:07:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.388 08:07:17 -- nvmf/common.sh@46 -- # : 0 00:06:44.388 08:07:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:44.388 08:07:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:44.388 08:07:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:44.388 08:07:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.388 08:07:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.388 08:07:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:44.388 08:07:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:44.388 08:07:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:44.388 08:07:18 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:44.388 08:07:18 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:44.388 08:07:18 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:44.388 08:07:18 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:44.388 08:07:18 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:44.388 08:07:18 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:44.388 08:07:18 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:44.388 08:07:18 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:44.388 08:07:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.388 08:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:44.388 08:07:18 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:06:44.388 08:07:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:44.388 08:07:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.388 08:07:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:44.388 08:07:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:44.388 08:07:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:44.388 08:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.388 08:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.388 08:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.388 08:07:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:44.388 08:07:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:44.388 08:07:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:44.388 08:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:50.953 08:07:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:50.953 08:07:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:50.953 08:07:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:50.953 08:07:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:50.953 08:07:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:50.953 08:07:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:50.953 08:07:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:50.953 08:07:23 -- nvmf/common.sh@294 -- # net_devs=() 00:06:50.953 08:07:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:50.953 08:07:23 -- nvmf/common.sh@295 -- # e810=() 00:06:50.953 08:07:23 -- nvmf/common.sh@295 -- # local -ga e810 00:06:50.953 08:07:23 -- nvmf/common.sh@296 -- # x722=() 00:06:50.953 08:07:23 -- nvmf/common.sh@296 -- # local -ga x722 00:06:50.953 08:07:23 -- nvmf/common.sh@297 -- # mlx=() 00:06:50.953 08:07:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:50.953 08:07:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
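The trace above shows nvmf/common.sh declaring ID tables (e810, x722, mlx) and filling them with PCI vendor:device pairs before choosing NICs for the transport. A condensed, standalone sketch of that grouping logic is below; `classify_nic` is a hypothetical helper name, the IDs are the ones visible in this log, and the Mellanox entries are simplified to a vendor-wide wildcard rather than the per-device list the real script uses from its PCI bus cache.

```shell
#!/usr/bin/env bash
# Hedged sketch: map a "vendor:device" PCI ID to the NIC family buckets the
# trace above builds (e810 / x722 / mlx). Not the real common.sh code.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 IDs seen in this log
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:*)                    echo mlx  ;;   # Mellanox (simplified wildcard)
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # → e810, matching the "Found 0000:af:00.0" lines below
```

In the log this classification is what lets the script pick the two E810 ports (0x8086 - 0x159b) as `pci_devs` for the TCP test.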
00:06:50.953 08:07:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.953 08:07:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:50.953 08:07:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:50.953 08:07:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:50.953 08:07:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:50.953 08:07:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:50.953 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:50.953 08:07:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
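The per-device loop traced above resolves each matched PCI address to its kernel net interface with a sysfs glob (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`) and then strips the path to get names like `cvl_0_0`. A runnable sketch of that lookup follows; `pci_to_netdev` is a hypothetical helper, and it takes the sysfs root as a parameter so the example can use a temporary directory instead of real hardware under `/sys`.

```shell
#!/usr/bin/env bash
# Hedged sketch of the sysfs PCI-address -> net-interface lookup from the trace.
pci_to_netdev() {
    local sysfs_root=$1 pci=$2 devs
    devs=("$sysfs_root/$pci/net/"*)          # same glob shape as the real script
    [ -e "${devs[0]}" ] || return 1          # no net/ entry: device has no interface
    printf '%s\n' "${devs[@]##*/}"           # strip the path, keep interface names
}

# Simulate /sys/bus/pci/devices with a scratch tree (real path needs hardware)
root=$(mktemp -d)
mkdir -p "$root/0000:af:00.0/net/cvl_0_0"
pci_to_netdev "$root" 0000:af:00.0           # prints "cvl_0_0"
```

This mirrors the "Found net devices under 0000:af:00.0: cvl_0_0" lines in the log, where the resolved names are appended to `net_devs` for the later netns wiring.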
00:06:50.953 08:07:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:50.953 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:50.953 08:07:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:50.953 08:07:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:50.953 08:07:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.953 08:07:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:50.953 08:07:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.953 08:07:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:50.953 Found net devices under 0000:af:00.0: cvl_0_0 00:06:50.953 08:07:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.953 08:07:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:50.953 08:07:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.953 08:07:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:50.953 08:07:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.953 08:07:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:50.953 Found net devices under 0000:af:00.1: cvl_0_1 00:06:50.953 08:07:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.953 08:07:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:50.953 08:07:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:50.953 08:07:23 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:50.953 08:07:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:50.953 08:07:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.953 08:07:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.953 08:07:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.953 08:07:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:50.953 08:07:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.953 08:07:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.953 08:07:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:50.953 08:07:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.953 08:07:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.953 08:07:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:50.953 08:07:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:50.953 08:07:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.953 08:07:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.954 08:07:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.954 08:07:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.954 08:07:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:50.954 08:07:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.954 08:07:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.954 08:07:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.954 08:07:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:50.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:50.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:06:50.954 00:06:50.954 --- 10.0.0.2 ping statistics --- 00:06:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.954 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:06:50.954 08:07:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:06:50.954 00:06:50.954 --- 10.0.0.1 ping statistics --- 00:06:50.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.954 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:06:50.954 08:07:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.954 08:07:24 -- nvmf/common.sh@410 -- # return 0 00:06:50.954 08:07:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:50.954 08:07:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.954 08:07:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:50.954 08:07:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:50.954 08:07:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.954 08:07:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:50.954 08:07:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:50.954 08:07:24 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:50.954 08:07:24 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:50.954 08:07:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:50.954 08:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:50.954 08:07:24 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:50.954 08:07:24 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:50.954 08:07:24 -- target/nvmf_example.sh@34 -- # nvmfpid=2099293 00:06:50.954 08:07:24 -- target/nvmf_example.sh@35 -- # trap 'process_shm 
--id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:50.954 08:07:24 -- target/nvmf_example.sh@36 -- # waitforlisten 2099293 00:06:50.954 08:07:24 -- common/autotest_common.sh@817 -- # '[' -z 2099293 ']' 00:06:50.954 08:07:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.954 08:07:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.954 08:07:24 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:50.954 08:07:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.954 08:07:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.954 08:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:50.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.522 08:07:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:51.522 08:07:24 -- common/autotest_common.sh@850 -- # return 0 00:06:51.522 08:07:24 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:51.522 08:07:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:51.522 08:07:24 -- common/autotest_common.sh@10 -- # set +x 00:06:51.522 08:07:25 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.522 08:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.522 08:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.522 08:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.522 08:07:25 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:51.522 08:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.522 08:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.522 08:07:25 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.522 08:07:25 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:51.522 08:07:25 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:51.522 08:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.522 08:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.522 08:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.522 08:07:25 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:51.522 08:07:25 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:51.522 08:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.522 08:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.522 08:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.522 08:07:25 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.522 08:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.522 08:07:25 -- common/autotest_common.sh@10 -- # set +x 00:06:51.522 08:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.522 08:07:25 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:51.522 08:07:25 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:51.522 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.571 Initializing NVMe Controllers 00:07:01.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:01.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:01.571 Initialization complete. 
Launching workers. 00:07:01.571 ======================================================== 00:07:01.571 Latency(us) 00:07:01.571 Device Information : IOPS MiB/s Average min max 00:07:01.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14882.18 58.13 4299.96 691.19 15534.78 00:07:01.571 ======================================================== 00:07:01.571 Total : 14882.18 58.13 4299.96 691.19 15534.78 00:07:01.571 00:07:01.571 08:07:35 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:01.571 08:07:35 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:01.571 08:07:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:01.571 08:07:35 -- nvmf/common.sh@116 -- # sync 00:07:01.571 08:07:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:01.571 08:07:35 -- nvmf/common.sh@119 -- # set +e 00:07:01.571 08:07:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:01.571 08:07:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:01.830 rmmod nvme_tcp 00:07:01.830 rmmod nvme_fabrics 00:07:01.830 rmmod nvme_keyring 00:07:01.830 08:07:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:01.830 08:07:35 -- nvmf/common.sh@123 -- # set -e 00:07:01.830 08:07:35 -- nvmf/common.sh@124 -- # return 0 00:07:01.830 08:07:35 -- nvmf/common.sh@477 -- # '[' -n 2099293 ']' 00:07:01.830 08:07:35 -- nvmf/common.sh@478 -- # killprocess 2099293 00:07:01.830 08:07:35 -- common/autotest_common.sh@924 -- # '[' -z 2099293 ']' 00:07:01.830 08:07:35 -- common/autotest_common.sh@928 -- # kill -0 2099293 00:07:01.830 08:07:35 -- common/autotest_common.sh@929 -- # uname 00:07:01.830 08:07:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:01.830 08:07:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2099293 00:07:01.830 08:07:35 -- common/autotest_common.sh@930 -- # process_name=nvmf 00:07:01.830 08:07:35 -- common/autotest_common.sh@934 -- # '[' nvmf = sudo ']' 00:07:01.830 08:07:35 -- 
common/autotest_common.sh@942 -- # echo 'killing process with pid 2099293' 00:07:01.830 killing process with pid 2099293 00:07:01.830 08:07:35 -- common/autotest_common.sh@943 -- # kill 2099293 00:07:01.830 08:07:35 -- common/autotest_common.sh@948 -- # wait 2099293 00:07:02.089 nvmf threads initialize successfully 00:07:02.089 bdev subsystem init successfully 00:07:02.089 created a nvmf target service 00:07:02.089 create targets's poll groups done 00:07:02.089 all subsystems of target started 00:07:02.089 nvmf target is running 00:07:02.089 all subsystems of target stopped 00:07:02.089 destroy targets's poll groups done 00:07:02.089 destroyed the nvmf target service 00:07:02.089 bdev subsystem finish successfully 00:07:02.089 nvmf threads destroy successfully 00:07:02.089 08:07:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:02.089 08:07:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:02.089 08:07:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:02.089 08:07:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.089 08:07:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:02.089 08:07:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.089 08:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.089 08:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.996 08:07:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:03.996 08:07:37 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:03.996 08:07:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:03.996 08:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 00:07:03.997 real 0m19.727s 00:07:03.997 user 0m45.810s 00:07:03.997 sys 0m5.873s 00:07:03.997 08:07:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.997 08:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 ************************************ 00:07:03.997 END TEST 
nvmf_example 00:07:03.997 ************************************ 00:07:03.997 08:07:37 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:03.997 08:07:37 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:03.997 08:07:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:03.997 08:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 ************************************ 00:07:03.997 START TEST nvmf_filesystem 00:07:03.997 ************************************ 00:07:03.997 08:07:37 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:04.259 * Looking for test storage... 00:07:04.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.259 08:07:37 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:04.259 08:07:37 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:04.259 08:07:37 -- common/autotest_common.sh@34 -- # set -e 00:07:04.259 08:07:37 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:04.259 08:07:37 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:04.259 08:07:37 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:04.259 08:07:37 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:04.259 08:07:37 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:04.259 08:07:37 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:04.259 08:07:37 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:04.259 08:07:37 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:04.259 08:07:37 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:04.259 
08:07:37 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:04.259 08:07:37 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:04.259 08:07:37 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:04.259 08:07:37 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:04.259 08:07:37 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:04.259 08:07:37 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:04.259 08:07:37 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:04.259 08:07:37 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:04.259 08:07:37 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:04.259 08:07:37 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:04.259 08:07:37 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:04.259 08:07:37 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:04.259 08:07:37 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:04.259 08:07:37 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:04.259 08:07:37 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:04.259 08:07:37 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:04.259 08:07:37 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:04.259 08:07:37 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:04.259 08:07:37 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:04.259 08:07:37 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:04.259 08:07:37 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:04.259 08:07:37 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:04.259 08:07:37 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:04.259 08:07:37 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:04.259 08:07:37 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:04.259 08:07:37 -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:07:04.259 08:07:37 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:04.259 08:07:37 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:04.259 08:07:37 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:04.259 08:07:37 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:04.259 08:07:37 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:04.259 08:07:37 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:04.259 08:07:37 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:04.260 08:07:37 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:04.260 08:07:37 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:04.260 08:07:37 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:04.260 08:07:37 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:04.260 08:07:37 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:04.260 08:07:37 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:04.260 08:07:37 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:04.260 08:07:37 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:04.260 08:07:37 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:04.260 08:07:37 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:04.260 08:07:37 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:04.260 08:07:37 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:04.260 08:07:37 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:04.260 08:07:37 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:04.260 08:07:37 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:04.260 08:07:37 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:04.260 08:07:37 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:04.260 08:07:37 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:04.260 08:07:37 -- common/build_config.sh@57 -- # 
CONFIG_IPSEC_MB_DIR= 00:07:04.260 08:07:37 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:04.260 08:07:37 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:04.260 08:07:37 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:07:04.260 08:07:37 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:04.260 08:07:37 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:04.260 08:07:37 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:04.260 08:07:37 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:04.260 08:07:37 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:04.260 08:07:37 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:04.260 08:07:37 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:04.260 08:07:37 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:04.260 08:07:37 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:04.260 08:07:37 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:04.260 08:07:37 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:04.260 08:07:37 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:04.260 08:07:37 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:04.260 08:07:37 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:04.260 08:07:37 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:04.260 08:07:37 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:04.260 08:07:37 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:04.260 08:07:37 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:04.260 08:07:37 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:04.260 08:07:37 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:04.260 08:07:37 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:04.260 08:07:37 -- common/applications.sh@8 -- # readlink 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:04.260 08:07:37 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:04.260 08:07:37 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:04.260 08:07:37 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:04.260 08:07:37 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:04.260 08:07:37 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:04.260 08:07:37 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:04.260 08:07:37 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:04.260 08:07:37 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:04.260 08:07:37 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:04.260 08:07:37 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:04.260 08:07:37 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:04.260 08:07:37 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:04.260 08:07:37 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:04.260 #define SPDK_CONFIG_H 00:07:04.260 #define SPDK_CONFIG_APPS 1 00:07:04.260 #define SPDK_CONFIG_ARCH native 00:07:04.260 #undef SPDK_CONFIG_ASAN 00:07:04.260 #undef SPDK_CONFIG_AVAHI 00:07:04.260 #undef SPDK_CONFIG_CET 00:07:04.260 #define SPDK_CONFIG_COVERAGE 1 00:07:04.260 #define SPDK_CONFIG_CROSS_PREFIX 00:07:04.260 #undef SPDK_CONFIG_CRYPTO 00:07:04.260 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:04.260 #undef SPDK_CONFIG_CUSTOMOCF 00:07:04.260 #undef SPDK_CONFIG_DAOS 00:07:04.260 #define SPDK_CONFIG_DAOS_DIR 00:07:04.260 
#define SPDK_CONFIG_DEBUG 1 00:07:04.260 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:04.260 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:04.260 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:04.260 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:04.260 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:04.260 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:04.260 #define SPDK_CONFIG_EXAMPLES 1 00:07:04.260 #undef SPDK_CONFIG_FC 00:07:04.260 #define SPDK_CONFIG_FC_PATH 00:07:04.260 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:04.260 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:04.260 #undef SPDK_CONFIG_FUSE 00:07:04.260 #undef SPDK_CONFIG_FUZZER 00:07:04.260 #define SPDK_CONFIG_FUZZER_LIB 00:07:04.260 #undef SPDK_CONFIG_GOLANG 00:07:04.260 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:04.260 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:04.260 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:04.260 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:04.260 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:04.260 #define SPDK_CONFIG_IDXD 1 00:07:04.260 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:04.260 #undef SPDK_CONFIG_IPSEC_MB 00:07:04.260 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:04.260 #define SPDK_CONFIG_ISAL 1 00:07:04.260 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:04.260 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:04.260 #define SPDK_CONFIG_LIBDIR 00:07:04.260 #undef SPDK_CONFIG_LTO 00:07:04.260 #define SPDK_CONFIG_MAX_LCORES 00:07:04.260 #define SPDK_CONFIG_NVME_CUSE 1 00:07:04.260 #undef SPDK_CONFIG_OCF 00:07:04.260 #define SPDK_CONFIG_OCF_PATH 00:07:04.260 #define SPDK_CONFIG_OPENSSL_PATH 00:07:04.260 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:04.260 #undef SPDK_CONFIG_PGO_USE 00:07:04.260 #define SPDK_CONFIG_PREFIX /usr/local 00:07:04.260 #undef SPDK_CONFIG_RAID5F 00:07:04.260 #undef SPDK_CONFIG_RBD 00:07:04.260 #define SPDK_CONFIG_RDMA 1 00:07:04.260 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:04.260 #define 
SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:04.260 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:04.260 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:04.260 #define SPDK_CONFIG_SHARED 1 00:07:04.260 #undef SPDK_CONFIG_SMA 00:07:04.260 #define SPDK_CONFIG_TESTS 1 00:07:04.260 #undef SPDK_CONFIG_TSAN 00:07:04.260 #define SPDK_CONFIG_UBLK 1 00:07:04.260 #define SPDK_CONFIG_UBSAN 1 00:07:04.260 #undef SPDK_CONFIG_UNIT_TESTS 00:07:04.260 #undef SPDK_CONFIG_URING 00:07:04.260 #define SPDK_CONFIG_URING_PATH 00:07:04.260 #undef SPDK_CONFIG_URING_ZNS 00:07:04.260 #undef SPDK_CONFIG_USDT 00:07:04.260 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:04.260 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:04.260 #undef SPDK_CONFIG_VFIO_USER 00:07:04.260 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:04.260 #define SPDK_CONFIG_VHOST 1 00:07:04.260 #define SPDK_CONFIG_VIRTIO 1 00:07:04.260 #undef SPDK_CONFIG_VTUNE 00:07:04.260 #define SPDK_CONFIG_VTUNE_DIR 00:07:04.260 #define SPDK_CONFIG_WERROR 1 00:07:04.260 #define SPDK_CONFIG_WPDK_DIR 00:07:04.260 #undef SPDK_CONFIG_XNVME 00:07:04.260 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:04.260 08:07:37 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:04.260 08:07:37 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.260 08:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.260 08:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.260 08:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.260 08:07:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.260 08:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.260 08:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.260 08:07:37 -- paths/export.sh@5 -- # export PATH 00:07:04.260 08:07:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.260 08:07:37 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:04.260 08:07:37 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:04.260 08:07:37 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:04.260 08:07:37 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:04.261 08:07:37 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:04.261 08:07:37 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:04.261 08:07:37 -- pm/common@16 -- # TEST_TAG=N/A 00:07:04.261 08:07:37 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:04.261 08:07:37 -- common/autotest_common.sh@52 -- # : 1 00:07:04.261 08:07:37 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:04.261 08:07:37 -- common/autotest_common.sh@56 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:04.261 08:07:37 -- common/autotest_common.sh@58 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:04.261 08:07:37 -- common/autotest_common.sh@60 -- # : 1 00:07:04.261 08:07:37 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:04.261 08:07:37 -- common/autotest_common.sh@62 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:04.261 08:07:37 -- common/autotest_common.sh@64 -- # : 00:07:04.261 08:07:37 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:04.261 08:07:37 -- common/autotest_common.sh@66 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:04.261 08:07:37 -- common/autotest_common.sh@68 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:04.261 08:07:37 -- common/autotest_common.sh@70 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:04.261 08:07:37 -- common/autotest_common.sh@72 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:04.261 08:07:37 -- common/autotest_common.sh@74 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:04.261 08:07:37 -- common/autotest_common.sh@76 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:04.261 08:07:37 -- common/autotest_common.sh@78 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:04.261 08:07:37 -- common/autotest_common.sh@80 -- # : 1 00:07:04.261 08:07:37 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:04.261 08:07:37 -- common/autotest_common.sh@82 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:04.261 08:07:37 -- common/autotest_common.sh@84 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:04.261 08:07:37 -- common/autotest_common.sh@86 -- # : 1 00:07:04.261 08:07:37 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:04.261 
08:07:37 -- common/autotest_common.sh@88 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:04.261 08:07:37 -- common/autotest_common.sh@90 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:04.261 08:07:37 -- common/autotest_common.sh@92 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:04.261 08:07:37 -- common/autotest_common.sh@94 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:04.261 08:07:37 -- common/autotest_common.sh@96 -- # : tcp 00:07:04.261 08:07:37 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:04.261 08:07:37 -- common/autotest_common.sh@98 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:04.261 08:07:37 -- common/autotest_common.sh@100 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:04.261 08:07:37 -- common/autotest_common.sh@102 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:04.261 08:07:37 -- common/autotest_common.sh@104 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:04.261 08:07:37 -- common/autotest_common.sh@106 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:04.261 08:07:37 -- common/autotest_common.sh@108 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:04.261 08:07:37 -- common/autotest_common.sh@110 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:04.261 08:07:37 -- common/autotest_common.sh@112 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:04.261 08:07:37 -- common/autotest_common.sh@114 -- # : 0 
00:07:04.261 08:07:37 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:04.261 08:07:37 -- common/autotest_common.sh@116 -- # : 1 00:07:04.261 08:07:37 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:04.261 08:07:37 -- common/autotest_common.sh@118 -- # : 00:07:04.261 08:07:37 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:04.261 08:07:37 -- common/autotest_common.sh@120 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:04.261 08:07:37 -- common/autotest_common.sh@122 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:04.261 08:07:37 -- common/autotest_common.sh@124 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:04.261 08:07:37 -- common/autotest_common.sh@126 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:04.261 08:07:37 -- common/autotest_common.sh@128 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:04.261 08:07:37 -- common/autotest_common.sh@130 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:04.261 08:07:37 -- common/autotest_common.sh@132 -- # : 00:07:04.261 08:07:37 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:04.261 08:07:37 -- common/autotest_common.sh@134 -- # : true 00:07:04.261 08:07:37 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:04.261 08:07:37 -- common/autotest_common.sh@136 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:04.261 08:07:37 -- common/autotest_common.sh@138 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:04.261 08:07:37 -- common/autotest_common.sh@140 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 
00:07:04.261 08:07:37 -- common/autotest_common.sh@142 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:04.261 08:07:37 -- common/autotest_common.sh@144 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:04.261 08:07:37 -- common/autotest_common.sh@146 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:04.261 08:07:37 -- common/autotest_common.sh@148 -- # : e810 00:07:04.261 08:07:37 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:04.261 08:07:37 -- common/autotest_common.sh@150 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:04.261 08:07:37 -- common/autotest_common.sh@152 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:04.261 08:07:37 -- common/autotest_common.sh@154 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:04.261 08:07:37 -- common/autotest_common.sh@156 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:04.261 08:07:37 -- common/autotest_common.sh@158 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:04.261 08:07:37 -- common/autotest_common.sh@161 -- # : 00:07:04.261 08:07:37 -- common/autotest_common.sh@162 -- # export SPDK_TEST_FUZZER_TARGET 00:07:04.261 08:07:37 -- common/autotest_common.sh@163 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@164 -- # export SPDK_TEST_NVMF_MDNS 00:07:04.261 08:07:37 -- common/autotest_common.sh@165 -- # : 0 00:07:04.261 08:07:37 -- common/autotest_common.sh@166 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:04.261 08:07:37 -- common/autotest_common.sh@169 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:04.261 08:07:37 -- 
common/autotest_common.sh@169 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:04.261 08:07:37 -- common/autotest_common.sh@170 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:04.261 08:07:37 -- common/autotest_common.sh@170 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:04.261 08:07:37 -- common/autotest_common.sh@171 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.261 08:07:37 -- common/autotest_common.sh@171 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.261 08:07:37 -- common/autotest_common.sh@172 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.261 08:07:37 -- common/autotest_common.sh@172 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.261 08:07:37 -- common/autotest_common.sh@175 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:04.261 08:07:37 -- common/autotest_common.sh@175 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:04.261 08:07:37 -- common/autotest_common.sh@179 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:04.261 08:07:37 -- common/autotest_common.sh@179 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:04.262 08:07:37 -- common/autotest_common.sh@183 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:04.262 08:07:37 -- common/autotest_common.sh@183 -- # PYTHONDONTWRITEBYTECODE=1 00:07:04.262 08:07:37 -- common/autotest_common.sh@187 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:04.262 08:07:37 -- common/autotest_common.sh@187 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:04.262 08:07:37 -- common/autotest_common.sh@188 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:04.262 08:07:37 -- common/autotest_common.sh@188 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:04.262 08:07:37 -- common/autotest_common.sh@192 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:04.262 08:07:37 -- common/autotest_common.sh@193 -- # rm -rf /var/tmp/asan_suppression_file 00:07:04.262 08:07:37 -- common/autotest_common.sh@194 -- # cat 00:07:04.262 08:07:37 -- common/autotest_common.sh@220 -- # echo leak:libfuse3.so 00:07:04.262 08:07:37 -- common/autotest_common.sh@222 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:04.262 08:07:37 -- common/autotest_common.sh@222 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:04.262 08:07:37 -- common/autotest_common.sh@224 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:04.262 08:07:37 -- 
common/autotest_common.sh@224 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:04.262 08:07:37 -- common/autotest_common.sh@226 -- # '[' -z /var/spdk/dependencies ']' 00:07:04.262 08:07:37 -- common/autotest_common.sh@229 -- # export DEPENDENCY_DIR 00:07:04.262 08:07:37 -- common/autotest_common.sh@233 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:04.262 08:07:37 -- common/autotest_common.sh@233 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:04.262 08:07:37 -- common/autotest_common.sh@234 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:04.262 08:07:37 -- common/autotest_common.sh@234 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:04.262 08:07:37 -- common/autotest_common.sh@237 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.262 08:07:37 -- common/autotest_common.sh@237 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.262 08:07:37 -- common/autotest_common.sh@238 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.262 08:07:37 -- common/autotest_common.sh@238 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.262 08:07:37 -- common/autotest_common.sh@240 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:04.262 08:07:37 -- common/autotest_common.sh@240 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:04.262 08:07:37 -- common/autotest_common.sh@243 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.262 08:07:37 -- common/autotest_common.sh@243 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.262 08:07:37 -- common/autotest_common.sh@246 -- # '[' 0 -eq 0 ']' 00:07:04.262 08:07:37 -- common/autotest_common.sh@247 -- # export valgrind= 00:07:04.262 08:07:37 -- 
common/autotest_common.sh@247 -- # valgrind= 00:07:04.262 08:07:37 -- common/autotest_common.sh@253 -- # uname -s 00:07:04.262 08:07:37 -- common/autotest_common.sh@253 -- # '[' Linux = Linux ']' 00:07:04.262 08:07:37 -- common/autotest_common.sh@254 -- # HUGEMEM=4096 00:07:04.262 08:07:37 -- common/autotest_common.sh@255 -- # export CLEAR_HUGE=yes 00:07:04.262 08:07:37 -- common/autotest_common.sh@255 -- # CLEAR_HUGE=yes 00:07:04.262 08:07:37 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@263 -- # MAKE=make 00:07:04.262 08:07:37 -- common/autotest_common.sh@264 -- # MAKEFLAGS=-j96 00:07:04.262 08:07:37 -- common/autotest_common.sh@280 -- # export HUGEMEM=4096 00:07:04.262 08:07:37 -- common/autotest_common.sh@280 -- # HUGEMEM=4096 00:07:04.262 08:07:37 -- common/autotest_common.sh@282 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:04.262 08:07:37 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:04.262 08:07:37 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:04.262 08:07:37 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:04.262 08:07:37 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:04.262 08:07:37 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:04.262 08:07:37 -- common/autotest_common.sh@307 -- # [[ -z 2101711 ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@307 -- # kill -0 2101711 00:07:04.262 08:07:37 -- common/autotest_common.sh@1663 -- # set_test_storage 2147483648 00:07:04.262 08:07:37 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:04.262 08:07:37 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:04.262 08:07:37 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:04.262 08:07:37 -- 
common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:04.262 08:07:37 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:04.262 08:07:37 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:04.262 08:07:37 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.0WHWnE 00:07:04.262 08:07:37 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:04.262 08:07:37 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0WHWnE/tests/target /tmp/spdk.0WHWnE 00:07:04.262 08:07:37 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@316 -- # df -T 00:07:04.262 08:07:37 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:04.262 08:07:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=931024896 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- 
# sizes["$mount"]=5284429824 00:07:04.262 08:07:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=4353404928 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=86358687744 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=95562752000 00:07:04.262 08:07:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=9204064256 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=47780118528 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47781376000 00:07:04.262 08:07:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=1257472 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=19102998528 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19112550400 00:07:04.262 08:07:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=9551872 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.262 08:07:37 -- 
common/autotest_common.sh@351 -- # avails["$mount"]=47780810752 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47781376000 00:07:04.262 08:07:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=565248 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=9556271104 00:07:04.262 08:07:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9556275200 00:07:04.262 08:07:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:04.262 08:07:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:04.262 08:07:37 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:04.262 * Looking for test storage... 00:07:04.262 08:07:37 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:04.262 08:07:37 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:04.262 08:07:37 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.262 08:07:37 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:04.262 08:07:37 -- common/autotest_common.sh@361 -- # mount=/ 00:07:04.262 08:07:37 -- common/autotest_common.sh@363 -- # target_space=86358687744 00:07:04.262 08:07:37 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:04.262 08:07:37 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:04.262 08:07:37 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:04.262 08:07:37 -- 
common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:04.262 08:07:37 -- common/autotest_common.sh@370 -- # new_size=11418656768 00:07:04.262 08:07:37 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:04.262 08:07:37 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.262 08:07:37 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.262 08:07:37 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.262 08:07:37 -- common/autotest_common.sh@378 -- # return 0 00:07:04.262 08:07:37 -- common/autotest_common.sh@1665 -- # set -o errtrace 00:07:04.263 08:07:37 -- common/autotest_common.sh@1666 -- # shopt -s extdebug 00:07:04.263 08:07:37 -- common/autotest_common.sh@1667 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:04.263 08:07:37 -- common/autotest_common.sh@1669 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:04.263 08:07:37 -- common/autotest_common.sh@1670 -- # true 00:07:04.263 08:07:37 -- common/autotest_common.sh@1672 -- # xtrace_fd 00:07:04.263 08:07:37 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:04.263 08:07:37 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:04.263 08:07:37 -- common/autotest_common.sh@27 -- # exec 00:07:04.263 08:07:37 -- common/autotest_common.sh@29 -- # exec 00:07:04.263 08:07:37 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:04.263 08:07:37 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:04.263 08:07:37 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:04.263 08:07:37 -- common/autotest_common.sh@18 -- # set -x 00:07:04.263 08:07:37 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.263 08:07:37 -- nvmf/common.sh@7 -- # uname -s 00:07:04.263 08:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.263 08:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.263 08:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.263 08:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.263 08:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.263 08:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.263 08:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.263 08:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.263 08:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.263 08:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.263 08:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:04.263 08:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:04.263 08:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.263 08:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.263 08:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.263 08:07:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.263 08:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.263 08:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.263 08:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.263 08:07:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.263 08:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.263 08:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.263 08:07:37 -- paths/export.sh@5 -- # export PATH 00:07:04.263 08:07:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.263 08:07:37 -- nvmf/common.sh@46 -- # : 0 00:07:04.263 08:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:04.263 08:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:04.263 08:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:04.263 08:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.263 08:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.263 08:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:04.263 08:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:04.263 08:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:04.263 08:07:37 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:04.263 08:07:37 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:04.263 08:07:37 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:04.263 08:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:04.263 08:07:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.263 08:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:04.263 08:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:04.263 08:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:04.263 08:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.263 08:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.263 08:07:37 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.263 08:07:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:04.263 08:07:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:04.263 08:07:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:04.263 08:07:37 -- common/autotest_common.sh@10 -- # set +x 00:07:10.838 08:07:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:10.838 08:07:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:10.838 08:07:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:10.838 08:07:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:10.838 08:07:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:10.838 08:07:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:10.838 08:07:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:10.838 08:07:43 -- nvmf/common.sh@294 -- # net_devs=() 00:07:10.838 08:07:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:10.838 08:07:43 -- nvmf/common.sh@295 -- # e810=() 00:07:10.838 08:07:43 -- nvmf/common.sh@295 -- # local -ga e810 00:07:10.838 08:07:43 -- nvmf/common.sh@296 -- # x722=() 00:07:10.838 08:07:43 -- nvmf/common.sh@296 -- # local -ga x722 00:07:10.838 08:07:43 -- nvmf/common.sh@297 -- # mlx=() 00:07:10.838 08:07:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:10.838 08:07:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.838 08:07:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:10.838 08:07:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:10.838 08:07:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:10.838 08:07:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:10.838 08:07:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:10.838 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:10.838 08:07:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:10.838 08:07:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:10.838 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:10.838 08:07:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:10.838 08:07:43 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:10.838 08:07:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:10.838 08:07:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.838 08:07:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:10.838 08:07:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.838 08:07:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:10.838 Found net devices under 0000:af:00.0: cvl_0_0 00:07:10.838 08:07:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.838 08:07:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:10.838 08:07:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.838 08:07:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:10.838 08:07:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.839 08:07:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:10.839 Found net devices under 0000:af:00.1: cvl_0_1 00:07:10.839 08:07:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.839 08:07:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:10.839 08:07:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:10.839 08:07:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:10.839 08:07:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:10.839 08:07:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:10.839 08:07:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.839 08:07:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.839 08:07:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.839 08:07:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:10.839 08:07:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.839 08:07:43 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.839 08:07:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:10.839 08:07:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.839 08:07:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.839 08:07:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:10.839 08:07:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:10.839 08:07:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.839 08:07:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.839 08:07:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.839 08:07:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.839 08:07:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:10.839 08:07:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.839 08:07:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.839 08:07:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.839 08:07:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:10.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:07:10.839 00:07:10.839 --- 10.0.0.2 ping statistics --- 00:07:10.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.839 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:07:10.839 08:07:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:07:10.839 00:07:10.839 --- 10.0.0.1 ping statistics --- 00:07:10.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.839 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:10.839 08:07:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.839 08:07:44 -- nvmf/common.sh@410 -- # return 0 00:07:10.839 08:07:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:10.839 08:07:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.839 08:07:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:10.839 08:07:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:10.839 08:07:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.839 08:07:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:10.839 08:07:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:10.839 08:07:44 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:10.839 08:07:44 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:10.839 08:07:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:10.839 08:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.839 ************************************ 00:07:10.839 START TEST nvmf_filesystem_no_in_capsule 00:07:10.839 ************************************ 00:07:10.839 08:07:44 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_part 0 00:07:10.839 08:07:44 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:10.839 08:07:44 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:10.839 08:07:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:10.839 08:07:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:10.839 08:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.839 08:07:44 -- nvmf/common.sh@469 -- # nvmfpid=2105083 00:07:10.839 08:07:44 -- nvmf/common.sh@470 -- # waitforlisten 2105083 
00:07:10.839 08:07:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.839 08:07:44 -- common/autotest_common.sh@817 -- # '[' -z 2105083 ']' 00:07:10.839 08:07:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.839 08:07:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.839 08:07:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.839 08:07:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.839 08:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:10.839 [2024-02-13 08:07:44.156575] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:10.839 [2024-02-13 08:07:44.156618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.839 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.839 [2024-02-13 08:07:44.222640] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.839 [2024-02-13 08:07:44.299329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.839 [2024-02-13 08:07:44.299442] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.839 [2024-02-13 08:07:44.299451] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.839 [2024-02-13 08:07:44.299457] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:10.839 [2024-02-13 08:07:44.299516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.839 [2024-02-13 08:07:44.299619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.839 [2024-02-13 08:07:44.299698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.839 [2024-02-13 08:07:44.299700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.407 08:07:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.407 08:07:44 -- common/autotest_common.sh@850 -- # return 0 00:07:11.407 08:07:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:11.407 08:07:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:11.407 08:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 08:07:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.407 08:07:44 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:11.407 08:07:44 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:11.407 08:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.407 08:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.407 [2024-02-13 08:07:44.991839] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.407 08:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.407 08:07:44 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:11.407 08:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.408 08:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.666 Malloc1 00:07:11.667 08:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.667 08:07:45 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.667 08:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.667 08:07:45 -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.667 08:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.667 08:07:45 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.667 08:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.667 08:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.667 08:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.667 08:07:45 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.667 08:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.667 08:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.667 [2024-02-13 08:07:45.140865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.667 08:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.667 08:07:45 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:11.667 08:07:45 -- common/autotest_common.sh@1355 -- # local bdev_name=Malloc1 00:07:11.667 08:07:45 -- common/autotest_common.sh@1356 -- # local bdev_info 00:07:11.667 08:07:45 -- common/autotest_common.sh@1357 -- # local bs 00:07:11.667 08:07:45 -- common/autotest_common.sh@1358 -- # local nb 00:07:11.667 08:07:45 -- common/autotest_common.sh@1359 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:11.667 08:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.667 08:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.667 08:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.667 08:07:45 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:07:11.667 { 00:07:11.667 "name": "Malloc1", 00:07:11.667 "aliases": [ 00:07:11.667 "710f20bc-f506-4a29-aff9-447b518da653" 00:07:11.667 ], 00:07:11.667 "product_name": "Malloc disk", 00:07:11.667 "block_size": 512, 00:07:11.667 "num_blocks": 1048576, 00:07:11.667 "uuid": 
"710f20bc-f506-4a29-aff9-447b518da653", 00:07:11.667 "assigned_rate_limits": { 00:07:11.667 "rw_ios_per_sec": 0, 00:07:11.667 "rw_mbytes_per_sec": 0, 00:07:11.667 "r_mbytes_per_sec": 0, 00:07:11.667 "w_mbytes_per_sec": 0 00:07:11.667 }, 00:07:11.667 "claimed": true, 00:07:11.667 "claim_type": "exclusive_write", 00:07:11.667 "zoned": false, 00:07:11.667 "supported_io_types": { 00:07:11.667 "read": true, 00:07:11.667 "write": true, 00:07:11.667 "unmap": true, 00:07:11.667 "write_zeroes": true, 00:07:11.667 "flush": true, 00:07:11.667 "reset": true, 00:07:11.667 "compare": false, 00:07:11.667 "compare_and_write": false, 00:07:11.667 "abort": true, 00:07:11.667 "nvme_admin": false, 00:07:11.667 "nvme_io": false 00:07:11.667 }, 00:07:11.667 "memory_domains": [ 00:07:11.667 { 00:07:11.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.667 "dma_device_type": 2 00:07:11.667 } 00:07:11.667 ], 00:07:11.667 "driver_specific": {} 00:07:11.667 } 00:07:11.667 ]' 00:07:11.667 08:07:45 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:07:11.667 08:07:45 -- common/autotest_common.sh@1360 -- # bs=512 00:07:11.667 08:07:45 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:07:11.667 08:07:45 -- common/autotest_common.sh@1361 -- # nb=1048576 00:07:11.667 08:07:45 -- common/autotest_common.sh@1364 -- # bdev_size=512 00:07:11.667 08:07:45 -- common/autotest_common.sh@1365 -- # echo 512 00:07:11.667 08:07:45 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:11.667 08:07:45 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.045 08:07:46 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.045 08:07:46 -- common/autotest_common.sh@1175 -- # local i=0 00:07:13.045 08:07:46 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:13.045 08:07:46 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:07:13.045 08:07:46 -- common/autotest_common.sh@1182 -- # sleep 2 00:07:14.951 08:07:48 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:07:14.951 08:07:48 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:07:14.951 08:07:48 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.951 08:07:48 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:07:14.951 08:07:48 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.951 08:07:48 -- common/autotest_common.sh@1185 -- # return 0 00:07:14.951 08:07:48 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:14.951 08:07:48 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:14.951 08:07:48 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:14.951 08:07:48 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:14.951 08:07:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:14.951 08:07:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:14.951 08:07:48 -- setup/common.sh@80 -- # echo 536870912 00:07:14.951 08:07:48 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:14.951 08:07:48 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:14.951 08:07:48 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:14.951 08:07:48 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:14.951 08:07:48 -- target/filesystem.sh@69 -- # partprobe 00:07:15.889 08:07:49 -- target/filesystem.sh@70 -- # sleep 1 00:07:16.826 08:07:50 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:16.826 08:07:50 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:16.826 08:07:50 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:16.826 08:07:50 -- common/autotest_common.sh@1081 -- # 
xtrace_disable 00:07:16.826 08:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:16.826 ************************************ 00:07:16.826 START TEST filesystem_ext4 00:07:16.826 ************************************ 00:07:16.826 08:07:50 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:16.826 08:07:50 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:16.826 08:07:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.826 08:07:50 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:16.826 08:07:50 -- common/autotest_common.sh@900 -- # local fstype=ext4 00:07:16.826 08:07:50 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:16.826 08:07:50 -- common/autotest_common.sh@902 -- # local i=0 00:07:16.826 08:07:50 -- common/autotest_common.sh@903 -- # local force 00:07:16.826 08:07:50 -- common/autotest_common.sh@905 -- # '[' ext4 = ext4 ']' 00:07:16.826 08:07:50 -- common/autotest_common.sh@906 -- # force=-F 00:07:16.826 08:07:50 -- common/autotest_common.sh@911 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:16.826 mke2fs 1.46.5 (30-Dec-2021) 00:07:16.826 Discarding device blocks: 0/522240 done 00:07:16.826 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:16.826 Filesystem UUID: 3d8c5d5e-291e-42a1-85c9-074c3f321982 00:07:16.826 Superblock backups stored on blocks: 00:07:16.826 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:16.826 00:07:16.826 Allocating group tables: 0/64 done 00:07:16.826 Writing inode tables: 0/64 done 00:07:18.730 Creating journal (8192 blocks): done 00:07:18.730 Writing superblocks and filesystem accounting information: 0/64 done 00:07:18.730 00:07:18.730 08:07:51 -- common/autotest_common.sh@919 -- # return 0 00:07:18.730 08:07:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.298 08:07:52 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.298 08:07:52 -- target/filesystem.sh@25 -- # sync 
00:07:19.298 08:07:52 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.298 08:07:52 -- target/filesystem.sh@27 -- # sync 00:07:19.298 08:07:52 -- target/filesystem.sh@29 -- # i=0 00:07:19.298 08:07:52 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.298 08:07:52 -- target/filesystem.sh@37 -- # kill -0 2105083 00:07:19.298 08:07:52 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.299 08:07:52 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.299 08:07:52 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.299 08:07:52 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.299 00:07:19.299 real 0m2.563s 00:07:19.299 user 0m0.023s 00:07:19.299 sys 0m0.069s 00:07:19.299 08:07:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.299 08:07:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.299 ************************************ 00:07:19.299 END TEST filesystem_ext4 00:07:19.299 ************************************ 00:07:19.299 08:07:52 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:19.299 08:07:52 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:19.299 08:07:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:19.299 08:07:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.299 ************************************ 00:07:19.299 START TEST filesystem_btrfs 00:07:19.299 ************************************ 00:07:19.299 08:07:52 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:19.299 08:07:52 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:19.299 08:07:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.299 08:07:52 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:19.299 08:07:52 -- common/autotest_common.sh@900 -- # local fstype=btrfs 00:07:19.299 08:07:52 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:19.299 08:07:52 -- 
common/autotest_common.sh@902 -- # local i=0 00:07:19.299 08:07:52 -- common/autotest_common.sh@903 -- # local force 00:07:19.299 08:07:52 -- common/autotest_common.sh@905 -- # '[' btrfs = ext4 ']' 00:07:19.299 08:07:52 -- common/autotest_common.sh@908 -- # force=-f 00:07:19.299 08:07:52 -- common/autotest_common.sh@911 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:19.558 btrfs-progs v6.6.2 00:07:19.558 See https://btrfs.readthedocs.io for more information. 00:07:19.558 00:07:19.558 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:19.558 NOTE: several default settings have changed in version 5.15, please make sure 00:07:19.558 this does not affect your deployments: 00:07:19.558 - DUP for metadata (-m dup) 00:07:19.558 - enabled no-holes (-O no-holes) 00:07:19.558 - enabled free-space-tree (-R free-space-tree) 00:07:19.558 00:07:19.558 Label: (null) 00:07:19.558 UUID: e51227ed-da8a-42ee-904a-aa665a0bacea 00:07:19.558 Node size: 16384 00:07:19.558 Sector size: 4096 00:07:19.558 Filesystem size: 510.00MiB 00:07:19.558 Block group profiles: 00:07:19.558 Data: single 8.00MiB 00:07:19.558 Metadata: DUP 32.00MiB 00:07:19.558 System: DUP 8.00MiB 00:07:19.558 SSD detected: yes 00:07:19.558 Zoned device: no 00:07:19.558 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:19.558 Runtime features: free-space-tree 00:07:19.558 Checksum: crc32c 00:07:19.558 Number of devices: 1 00:07:19.558 Devices: 00:07:19.558 ID SIZE PATH 00:07:19.558 1 510.00MiB /dev/nvme0n1p1 00:07:19.558 00:07:19.558 08:07:53 -- common/autotest_common.sh@919 -- # return 0 00:07:19.558 08:07:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.495 08:07:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.495 08:07:53 -- target/filesystem.sh@25 -- # sync 00:07:20.495 08:07:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.495 08:07:53 -- target/filesystem.sh@27 -- # sync 00:07:20.495 08:07:53 -- target/filesystem.sh@29 -- # 
i=0 00:07:20.495 08:07:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.495 08:07:53 -- target/filesystem.sh@37 -- # kill -0 2105083 00:07:20.495 08:07:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.495 08:07:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.495 08:07:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.495 08:07:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.495 00:07:20.495 real 0m1.182s 00:07:20.495 user 0m0.023s 00:07:20.495 sys 0m0.126s 00:07:20.495 08:07:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.495 08:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.495 ************************************ 00:07:20.495 END TEST filesystem_btrfs 00:07:20.495 ************************************ 00:07:20.495 08:07:54 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:20.495 08:07:54 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:20.495 08:07:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:20.495 08:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:20.495 ************************************ 00:07:20.495 START TEST filesystem_xfs 00:07:20.495 ************************************ 00:07:20.495 08:07:54 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create xfs nvme0n1 00:07:20.495 08:07:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:20.495 08:07:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.495 08:07:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:20.495 08:07:54 -- common/autotest_common.sh@900 -- # local fstype=xfs 00:07:20.495 08:07:54 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:20.495 08:07:54 -- common/autotest_common.sh@902 -- # local i=0 00:07:20.495 08:07:54 -- common/autotest_common.sh@903 -- # local force 00:07:20.495 08:07:54 -- common/autotest_common.sh@905 -- # '[' xfs = ext4 ']' 
00:07:20.495 08:07:54 -- common/autotest_common.sh@908 -- # force=-f 00:07:20.495 08:07:54 -- common/autotest_common.sh@911 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:20.495 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:20.495 = sectsz=512 attr=2, projid32bit=1 00:07:20.495 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:20.495 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:20.495 data = bsize=4096 blocks=130560, imaxpct=25 00:07:20.495 = sunit=0 swidth=0 blks 00:07:20.495 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:20.495 log =internal log bsize=4096 blocks=16384, version=2 00:07:20.495 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:20.495 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:21.480 Discarding blocks...Done. 00:07:21.480 08:07:55 -- common/autotest_common.sh@919 -- # return 0 00:07:21.480 08:07:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.386 08:07:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.386 08:07:56 -- target/filesystem.sh@25 -- # sync 00:07:23.386 08:07:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:23.386 08:07:56 -- target/filesystem.sh@27 -- # sync 00:07:23.386 08:07:56 -- target/filesystem.sh@29 -- # i=0 00:07:23.386 08:07:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.386 08:07:56 -- target/filesystem.sh@37 -- # kill -0 2105083 00:07:23.386 08:07:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.386 08:07:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.386 08:07:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.386 08:07:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.386 00:07:23.386 real 0m2.888s 00:07:23.386 user 0m0.020s 00:07:23.386 sys 0m0.074s 00:07:23.386 08:07:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.386 08:07:56 -- common/autotest_common.sh@10 -- # set +x 00:07:23.386 ************************************ 00:07:23.386 END TEST filesystem_xfs 
00:07:23.386 ************************************ 00:07:23.386 08:07:56 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:23.645 08:07:57 -- target/filesystem.sh@93 -- # sync 00:07:23.645 08:07:57 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:23.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.645 08:07:57 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:23.645 08:07:57 -- common/autotest_common.sh@1196 -- # local i=0 00:07:23.645 08:07:57 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:07:23.645 08:07:57 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.645 08:07:57 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:23.645 08:07:57 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.645 08:07:57 -- common/autotest_common.sh@1208 -- # return 0 00:07:23.645 08:07:57 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.645 08:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.645 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:23.645 08:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.645 08:07:57 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:23.645 08:07:57 -- target/filesystem.sh@101 -- # killprocess 2105083 00:07:23.645 08:07:57 -- common/autotest_common.sh@924 -- # '[' -z 2105083 ']' 00:07:23.645 08:07:57 -- common/autotest_common.sh@928 -- # kill -0 2105083 00:07:23.645 08:07:57 -- common/autotest_common.sh@929 -- # uname 00:07:23.645 08:07:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:23.645 08:07:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2105083 00:07:23.645 08:07:57 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:23.645 08:07:57 -- common/autotest_common.sh@934 -- # 
'[' reactor_0 = sudo ']' 00:07:23.645 08:07:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2105083' 00:07:23.645 killing process with pid 2105083 00:07:23.645 08:07:57 -- common/autotest_common.sh@943 -- # kill 2105083 00:07:23.645 08:07:57 -- common/autotest_common.sh@948 -- # wait 2105083 00:07:24.215 08:07:57 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:24.215 00:07:24.215 real 0m13.509s 00:07:24.215 user 0m52.976s 00:07:24.215 sys 0m1.223s 00:07:24.215 08:07:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.215 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.215 ************************************ 00:07:24.215 END TEST nvmf_filesystem_no_in_capsule 00:07:24.215 ************************************ 00:07:24.215 08:07:57 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:24.215 08:07:57 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:24.215 08:07:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:24.215 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.215 ************************************ 00:07:24.215 START TEST nvmf_filesystem_in_capsule 00:07:24.215 ************************************ 00:07:24.215 08:07:57 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_part 4096 00:07:24.215 08:07:57 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:24.215 08:07:57 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:24.215 08:07:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:24.215 08:07:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:24.215 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.215 08:07:57 -- nvmf/common.sh@469 -- # nvmfpid=2107555 00:07:24.215 08:07:57 -- nvmf/common.sh@470 -- # waitforlisten 2107555 00:07:24.215 08:07:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:07:24.215 08:07:57 -- common/autotest_common.sh@817 -- # '[' -z 2107555 ']' 00:07:24.215 08:07:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.215 08:07:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:24.215 08:07:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.215 08:07:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:24.215 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:24.215 [2024-02-13 08:07:57.705841] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:24.215 [2024-02-13 08:07:57.705885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.215 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.215 [2024-02-13 08:07:57.769457] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.215 [2024-02-13 08:07:57.835638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:24.215 [2024-02-13 08:07:57.835762] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.215 [2024-02-13 08:07:57.835770] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.215 [2024-02-13 08:07:57.835781] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:24.215 [2024-02-13 08:07:57.835826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.215 [2024-02-13 08:07:57.835933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.215 [2024-02-13 08:07:57.836002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.215 [2024-02-13 08:07:57.836003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.153 08:07:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.153 08:07:58 -- common/autotest_common.sh@850 -- # return 0 00:07:25.153 08:07:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:25.153 08:07:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:25.153 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 08:07:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.153 08:07:58 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:25.153 08:07:58 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:25.153 08:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.153 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 [2024-02-13 08:07:58.542031] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.153 08:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.153 08:07:58 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:25.153 08:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.153 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 Malloc1 00:07:25.153 08:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.153 08:07:58 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:25.153 08:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.153 08:07:58 -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.153 08:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.153 08:07:58 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:25.153 08:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.153 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 08:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.153 08:07:58 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.153 08:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.153 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 [2024-02-13 08:07:58.689066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.153 08:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.153 08:07:58 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:25.153 08:07:58 -- common/autotest_common.sh@1355 -- # local bdev_name=Malloc1 00:07:25.153 08:07:58 -- common/autotest_common.sh@1356 -- # local bdev_info 00:07:25.153 08:07:58 -- common/autotest_common.sh@1357 -- # local bs 00:07:25.153 08:07:58 -- common/autotest_common.sh@1358 -- # local nb 00:07:25.153 08:07:58 -- common/autotest_common.sh@1359 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:25.153 08:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.153 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 08:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.153 08:07:58 -- common/autotest_common.sh@1359 -- # bdev_info='[ 00:07:25.153 { 00:07:25.153 "name": "Malloc1", 00:07:25.153 "aliases": [ 00:07:25.153 "09ca384d-9b80-4f6c-9b84-eb28c097d6e9" 00:07:25.153 ], 00:07:25.153 "product_name": "Malloc disk", 00:07:25.153 "block_size": 512, 00:07:25.153 "num_blocks": 1048576, 00:07:25.153 "uuid": 
"09ca384d-9b80-4f6c-9b84-eb28c097d6e9", 00:07:25.153 "assigned_rate_limits": { 00:07:25.153 "rw_ios_per_sec": 0, 00:07:25.153 "rw_mbytes_per_sec": 0, 00:07:25.153 "r_mbytes_per_sec": 0, 00:07:25.153 "w_mbytes_per_sec": 0 00:07:25.153 }, 00:07:25.153 "claimed": true, 00:07:25.153 "claim_type": "exclusive_write", 00:07:25.153 "zoned": false, 00:07:25.153 "supported_io_types": { 00:07:25.153 "read": true, 00:07:25.153 "write": true, 00:07:25.153 "unmap": true, 00:07:25.153 "write_zeroes": true, 00:07:25.153 "flush": true, 00:07:25.153 "reset": true, 00:07:25.153 "compare": false, 00:07:25.153 "compare_and_write": false, 00:07:25.153 "abort": true, 00:07:25.153 "nvme_admin": false, 00:07:25.153 "nvme_io": false 00:07:25.153 }, 00:07:25.153 "memory_domains": [ 00:07:25.153 { 00:07:25.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.153 "dma_device_type": 2 00:07:25.153 } 00:07:25.153 ], 00:07:25.153 "driver_specific": {} 00:07:25.153 } 00:07:25.153 ]' 00:07:25.153 08:07:58 -- common/autotest_common.sh@1360 -- # jq '.[] .block_size' 00:07:25.153 08:07:58 -- common/autotest_common.sh@1360 -- # bs=512 00:07:25.153 08:07:58 -- common/autotest_common.sh@1361 -- # jq '.[] .num_blocks' 00:07:25.153 08:07:58 -- common/autotest_common.sh@1361 -- # nb=1048576 00:07:25.153 08:07:58 -- common/autotest_common.sh@1364 -- # bdev_size=512 00:07:25.153 08:07:58 -- common/autotest_common.sh@1365 -- # echo 512 00:07:25.153 08:07:58 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:25.153 08:07:58 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.531 08:07:59 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.531 08:07:59 -- common/autotest_common.sh@1175 -- # local i=0 00:07:26.531 08:07:59 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:26.531 08:07:59 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:07:26.532 08:08:00 -- common/autotest_common.sh@1182 -- # sleep 2 00:07:28.436 08:08:02 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:07:28.436 08:08:02 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:07:28.437 08:08:02 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:07:28.437 08:08:02 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:07:28.437 08:08:02 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:07:28.437 08:08:02 -- common/autotest_common.sh@1185 -- # return 0 00:07:28.437 08:08:02 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:28.437 08:08:02 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:28.437 08:08:02 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:28.437 08:08:02 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:28.437 08:08:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:28.437 08:08:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:28.437 08:08:02 -- setup/common.sh@80 -- # echo 536870912 00:07:28.437 08:08:02 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:28.437 08:08:02 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:28.437 08:08:02 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:28.437 08:08:02 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:28.697 08:08:02 -- target/filesystem.sh@69 -- # partprobe 00:07:29.263 08:08:02 -- target/filesystem.sh@70 -- # sleep 1 00:07:30.200 08:08:03 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:30.200 08:08:03 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:30.200 08:08:03 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:30.200 08:08:03 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:07:30.200 08:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:30.200 ************************************ 00:07:30.200 START TEST filesystem_in_capsule_ext4 00:07:30.200 ************************************ 00:07:30.200 08:08:03 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:30.200 08:08:03 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:30.200 08:08:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.200 08:08:03 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:30.201 08:08:03 -- common/autotest_common.sh@900 -- # local fstype=ext4 00:07:30.201 08:08:03 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:30.201 08:08:03 -- common/autotest_common.sh@902 -- # local i=0 00:07:30.201 08:08:03 -- common/autotest_common.sh@903 -- # local force 00:07:30.201 08:08:03 -- common/autotest_common.sh@905 -- # '[' ext4 = ext4 ']' 00:07:30.201 08:08:03 -- common/autotest_common.sh@906 -- # force=-F 00:07:30.201 08:08:03 -- common/autotest_common.sh@911 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:30.201 mke2fs 1.46.5 (30-Dec-2021) 00:07:30.201 Discarding device blocks: 0/522240 done 00:07:30.201 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:30.201 Filesystem UUID: 8c573401-a626-4acb-95fb-8f4fb92108f5 00:07:30.201 Superblock backups stored on blocks: 00:07:30.201 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:30.201 00:07:30.201 Allocating group tables: 0/64 done 00:07:30.201 Writing inode tables: 0/64 done 00:07:30.201 Creating journal (8192 blocks): done 00:07:30.459 Writing superblocks and filesystem accounting information: 0/64 done 00:07:30.459 00:07:30.459 08:08:03 -- common/autotest_common.sh@919 -- # return 0 00:07:30.459 08:08:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.459 08:08:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.459 08:08:04 
-- target/filesystem.sh@25 -- # sync 00:07:30.459 08:08:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.459 08:08:04 -- target/filesystem.sh@27 -- # sync 00:07:30.459 08:08:04 -- target/filesystem.sh@29 -- # i=0 00:07:30.459 08:08:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.459 08:08:04 -- target/filesystem.sh@37 -- # kill -0 2107555 00:07:30.459 08:08:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.459 08:08:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.718 08:08:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.718 08:08:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.718 00:07:30.718 real 0m0.496s 00:07:30.718 user 0m0.036s 00:07:30.718 sys 0m0.053s 00:07:30.718 08:08:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.718 08:08:04 -- common/autotest_common.sh@10 -- # set +x 00:07:30.718 ************************************ 00:07:30.718 END TEST filesystem_in_capsule_ext4 00:07:30.718 ************************************ 00:07:30.718 08:08:04 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:30.718 08:08:04 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:30.718 08:08:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:30.718 08:08:04 -- common/autotest_common.sh@10 -- # set +x 00:07:30.718 ************************************ 00:07:30.718 START TEST filesystem_in_capsule_btrfs 00:07:30.718 ************************************ 00:07:30.718 08:08:04 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:30.718 08:08:04 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:30.718 08:08:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.718 08:08:04 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:30.718 08:08:04 -- common/autotest_common.sh@900 -- # local fstype=btrfs 00:07:30.718 08:08:04 -- 
common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:30.718 08:08:04 -- common/autotest_common.sh@902 -- # local i=0 00:07:30.718 08:08:04 -- common/autotest_common.sh@903 -- # local force 00:07:30.718 08:08:04 -- common/autotest_common.sh@905 -- # '[' btrfs = ext4 ']' 00:07:30.718 08:08:04 -- common/autotest_common.sh@908 -- # force=-f 00:07:30.719 08:08:04 -- common/autotest_common.sh@911 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:30.719 btrfs-progs v6.6.2 00:07:30.719 See https://btrfs.readthedocs.io for more information. 00:07:30.719 00:07:30.719 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:30.719 NOTE: several default settings have changed in version 5.15, please make sure 00:07:30.719 this does not affect your deployments: 00:07:30.719 - DUP for metadata (-m dup) 00:07:30.719 - enabled no-holes (-O no-holes) 00:07:30.719 - enabled free-space-tree (-R free-space-tree) 00:07:30.719 00:07:30.719 Label: (null) 00:07:30.719 UUID: a611459d-7586-4a3a-8475-e422931fdeff 00:07:30.719 Node size: 16384 00:07:30.719 Sector size: 4096 00:07:30.719 Filesystem size: 510.00MiB 00:07:30.719 Block group profiles: 00:07:30.719 Data: single 8.00MiB 00:07:30.719 Metadata: DUP 32.00MiB 00:07:30.719 System: DUP 8.00MiB 00:07:30.719 SSD detected: yes 00:07:30.719 Zoned device: no 00:07:30.719 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:30.719 Runtime features: free-space-tree 00:07:30.719 Checksum: crc32c 00:07:30.719 Number of devices: 1 00:07:30.719 Devices: 00:07:30.719 ID SIZE PATH 00:07:30.719 1 510.00MiB /dev/nvme0n1p1 00:07:30.719 00:07:30.719 08:08:04 -- common/autotest_common.sh@919 -- # return 0 00:07:30.719 08:08:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.654 08:08:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.654 08:08:05 -- target/filesystem.sh@25 -- # sync 00:07:31.654 08:08:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.654 08:08:05 
-- target/filesystem.sh@27 -- # sync 00:07:31.654 08:08:05 -- target/filesystem.sh@29 -- # i=0 00:07:31.654 08:08:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.654 08:08:05 -- target/filesystem.sh@37 -- # kill -0 2107555 00:07:31.654 08:08:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.654 08:08:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.654 08:08:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.654 08:08:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.654 00:07:31.654 real 0m1.126s 00:07:31.654 user 0m0.018s 00:07:31.654 sys 0m0.134s 00:07:31.654 08:08:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.654 08:08:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.654 ************************************ 00:07:31.654 END TEST filesystem_in_capsule_btrfs 00:07:31.654 ************************************ 00:07:31.913 08:08:05 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:31.913 08:08:05 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:31.913 08:08:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:31.913 08:08:05 -- common/autotest_common.sh@10 -- # set +x 00:07:31.913 ************************************ 00:07:31.913 START TEST filesystem_in_capsule_xfs 00:07:31.913 ************************************ 00:07:31.913 08:08:05 -- common/autotest_common.sh@1102 -- # nvmf_filesystem_create xfs nvme0n1 00:07:31.913 08:08:05 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:31.913 08:08:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.913 08:08:05 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:31.913 08:08:05 -- common/autotest_common.sh@900 -- # local fstype=xfs 00:07:31.913 08:08:05 -- common/autotest_common.sh@901 -- # local dev_name=/dev/nvme0n1p1 00:07:31.913 08:08:05 -- common/autotest_common.sh@902 -- # local i=0 00:07:31.913 08:08:05 -- 
common/autotest_common.sh@903 -- # local force 00:07:31.913 08:08:05 -- common/autotest_common.sh@905 -- # '[' xfs = ext4 ']' 00:07:31.913 08:08:05 -- common/autotest_common.sh@908 -- # force=-f 00:07:31.913 08:08:05 -- common/autotest_common.sh@911 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:31.913 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:31.913 = sectsz=512 attr=2, projid32bit=1 00:07:31.913 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:31.913 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:31.913 data = bsize=4096 blocks=130560, imaxpct=25 00:07:31.913 = sunit=0 swidth=0 blks 00:07:31.913 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:31.913 log =internal log bsize=4096 blocks=16384, version=2 00:07:31.913 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:31.913 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:32.850 Discarding blocks...Done. 00:07:32.850 08:08:06 -- common/autotest_common.sh@919 -- # return 0 00:07:32.850 08:08:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.384 08:08:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.384 08:08:08 -- target/filesystem.sh@25 -- # sync 00:07:35.384 08:08:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.384 08:08:09 -- target/filesystem.sh@27 -- # sync 00:07:35.384 08:08:09 -- target/filesystem.sh@29 -- # i=0 00:07:35.384 08:08:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.384 08:08:09 -- target/filesystem.sh@37 -- # kill -0 2107555 00:07:35.384 08:08:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.384 08:08:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.384 08:08:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.384 08:08:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.384 00:07:35.384 real 0m3.675s 00:07:35.384 user 0m0.025s 00:07:35.384 sys 0m0.070s 00:07:35.384 08:08:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.384 08:08:09 -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.384 ************************************ 00:07:35.384 END TEST filesystem_in_capsule_xfs 00:07:35.384 ************************************ 00:07:35.643 08:08:09 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:35.903 08:08:09 -- target/filesystem.sh@93 -- # sync 00:07:35.903 08:08:09 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.903 08:08:09 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.903 08:08:09 -- common/autotest_common.sh@1196 -- # local i=0 00:07:35.903 08:08:09 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:07:35.903 08:08:09 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.903 08:08:09 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:35.903 08:08:09 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.903 08:08:09 -- common/autotest_common.sh@1208 -- # return 0 00:07:35.903 08:08:09 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.903 08:08:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.903 08:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:35.903 08:08:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.903 08:08:09 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:35.903 08:08:09 -- target/filesystem.sh@101 -- # killprocess 2107555 00:07:35.903 08:08:09 -- common/autotest_common.sh@924 -- # '[' -z 2107555 ']' 00:07:35.903 08:08:09 -- common/autotest_common.sh@928 -- # kill -0 2107555 00:07:35.903 08:08:09 -- common/autotest_common.sh@929 -- # uname 00:07:35.903 08:08:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:35.903 08:08:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2107555 
00:07:35.903 08:08:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:35.903 08:08:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:35.903 08:08:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2107555' 00:07:35.903 killing process with pid 2107555 00:07:35.903 08:08:09 -- common/autotest_common.sh@943 -- # kill 2107555 00:07:35.903 08:08:09 -- common/autotest_common.sh@948 -- # wait 2107555 00:07:36.474 08:08:09 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:36.474 00:07:36.474 real 0m12.277s 00:07:36.474 user 0m48.146s 00:07:36.474 sys 0m1.178s 00:07:36.474 08:08:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.474 08:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:36.474 ************************************ 00:07:36.474 END TEST nvmf_filesystem_in_capsule 00:07:36.474 ************************************ 00:07:36.474 08:08:09 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:36.474 08:08:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:36.474 08:08:09 -- nvmf/common.sh@116 -- # sync 00:07:36.474 08:08:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:36.474 08:08:09 -- nvmf/common.sh@119 -- # set +e 00:07:36.474 08:08:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:36.474 08:08:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:36.474 rmmod nvme_tcp 00:07:36.474 rmmod nvme_fabrics 00:07:36.474 rmmod nvme_keyring 00:07:36.474 08:08:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:36.474 08:08:10 -- nvmf/common.sh@123 -- # set -e 00:07:36.474 08:08:10 -- nvmf/common.sh@124 -- # return 0 00:07:36.474 08:08:10 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:36.474 08:08:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:36.474 08:08:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:36.474 08:08:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:36.474 08:08:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:07:36.474 08:08:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:36.474 08:08:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.474 08:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.474 08:08:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.011 08:08:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:39.011 00:07:39.011 real 0m34.433s 00:07:39.011 user 1m43.031s 00:07:39.011 sys 0m7.160s 00:07:39.011 08:08:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.011 08:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:39.011 ************************************ 00:07:39.011 END TEST nvmf_filesystem 00:07:39.011 ************************************ 00:07:39.011 08:08:12 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:39.011 08:08:12 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:39.011 08:08:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:39.011 08:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:39.011 ************************************ 00:07:39.011 START TEST nvmf_discovery 00:07:39.011 ************************************ 00:07:39.011 08:08:12 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:39.011 * Looking for test storage... 
00:07:39.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.011 08:08:12 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.011 08:08:12 -- nvmf/common.sh@7 -- # uname -s 00:07:39.011 08:08:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.011 08:08:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.011 08:08:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.011 08:08:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.011 08:08:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.011 08:08:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.011 08:08:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.011 08:08:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.011 08:08:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.011 08:08:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.011 08:08:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:39.011 08:08:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:39.011 08:08:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.011 08:08:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.011 08:08:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.011 08:08:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.011 08:08:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.011 08:08:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.011 08:08:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.011 08:08:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.011 08:08:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.011 08:08:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.011 08:08:12 -- paths/export.sh@5 -- # export PATH 00:07:39.011 08:08:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.011 08:08:12 -- nvmf/common.sh@46 -- # : 0 00:07:39.011 08:08:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:39.011 08:08:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:39.011 08:08:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:39.011 08:08:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.011 08:08:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.011 08:08:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:39.011 08:08:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:39.011 08:08:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:39.011 08:08:12 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:39.011 08:08:12 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:39.011 08:08:12 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:39.011 08:08:12 -- target/discovery.sh@15 -- # hash nvme 00:07:39.011 08:08:12 -- target/discovery.sh@20 -- # nvmftestinit 00:07:39.011 08:08:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:39.011 08:08:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.011 08:08:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:39.011 08:08:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:39.011 08:08:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:39.011 08:08:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.011 08:08:12 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:07:39.011 08:08:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.011 08:08:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:39.011 08:08:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:39.011 08:08:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:39.011 08:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:44.348 08:08:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:44.348 08:08:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:44.348 08:08:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:44.348 08:08:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:44.348 08:08:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:44.348 08:08:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:44.348 08:08:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:44.348 08:08:17 -- nvmf/common.sh@294 -- # net_devs=() 00:07:44.348 08:08:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:44.348 08:08:17 -- nvmf/common.sh@295 -- # e810=() 00:07:44.348 08:08:17 -- nvmf/common.sh@295 -- # local -ga e810 00:07:44.348 08:08:17 -- nvmf/common.sh@296 -- # x722=() 00:07:44.348 08:08:17 -- nvmf/common.sh@296 -- # local -ga x722 00:07:44.348 08:08:17 -- nvmf/common.sh@297 -- # mlx=() 00:07:44.348 08:08:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:44.348 08:08:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.348 08:08:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:44.348 08:08:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:44.348 08:08:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:44.348 08:08:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:44.348 08:08:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:44.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:44.348 08:08:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:44.348 08:08:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:44.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:44.348 08:08:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:07:44.348 08:08:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:44.348 08:08:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:44.348 08:08:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.348 08:08:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:44.348 08:08:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.348 08:08:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:44.348 Found net devices under 0000:af:00.0: cvl_0_0 00:07:44.348 08:08:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.348 08:08:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:44.348 08:08:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.348 08:08:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:44.348 08:08:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.348 08:08:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:44.348 Found net devices under 0000:af:00.1: cvl_0_1 00:07:44.348 08:08:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.348 08:08:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:44.348 08:08:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:44.348 08:08:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:44.348 08:08:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.348 08:08:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.348 08:08:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.348 08:08:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:44.348 08:08:17 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.348 08:08:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.348 08:08:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:44.348 08:08:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.348 08:08:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.348 08:08:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:44.348 08:08:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:44.348 08:08:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.348 08:08:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.348 08:08:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.348 08:08:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.348 08:08:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:44.348 08:08:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.348 08:08:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.348 08:08:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.348 08:08:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:44.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:07:44.348 00:07:44.348 --- 10.0.0.2 ping statistics --- 00:07:44.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.348 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:44.348 08:08:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:07:44.348 00:07:44.348 --- 10.0.0.1 ping statistics --- 00:07:44.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.348 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:07:44.348 08:08:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.348 08:08:17 -- nvmf/common.sh@410 -- # return 0 00:07:44.348 08:08:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:44.348 08:08:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.348 08:08:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:44.348 08:08:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.348 08:08:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:44.348 08:08:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:44.348 08:08:17 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:44.348 08:08:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:44.348 08:08:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:44.348 08:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:44.348 08:08:17 -- nvmf/common.sh@469 -- # nvmfpid=2113631 00:07:44.348 08:08:17 -- nvmf/common.sh@470 -- # waitforlisten 2113631 00:07:44.348 08:08:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.348 08:08:17 -- common/autotest_common.sh@817 -- # '[' -z 2113631 ']' 00:07:44.348 08:08:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.348 08:08:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:44.348 08:08:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:44.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.348 08:08:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:44.348 08:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:44.348 [2024-02-13 08:08:18.007958] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:44.348 [2024-02-13 08:08:18.007999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.608 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.608 [2024-02-13 08:08:18.074456] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.608 [2024-02-13 08:08:18.148582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.608 [2024-02-13 08:08:18.148699] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.608 [2024-02-13 08:08:18.148707] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.608 [2024-02-13 08:08:18.148713] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:44.608 [2024-02-13 08:08:18.148756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.608 [2024-02-13 08:08:18.148776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.608 [2024-02-13 08:08:18.148867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.608 [2024-02-13 08:08:18.148869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.176 08:08:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:45.176 08:08:18 -- common/autotest_common.sh@850 -- # return 0 00:07:45.176 08:08:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:45.176 08:08:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:45.176 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.176 08:08:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.176 08:08:18 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.176 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.176 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.176 [2024-02-13 08:08:18.857928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.176 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@26 -- # seq 1 4 00:07:45.436 08:08:18 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.436 08:08:18 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 Null1 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 [2024-02-13 08:08:18.903342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.436 08:08:18 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 Null2 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 
08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.436 08:08:18 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 Null3 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:45.436 08:08:18 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 Null4 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:18 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:45.436 08:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:19 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.436 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:19 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:45.436 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.436 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.436 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.436 08:08:19 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:45.436 00:07:45.436 Discovery Log Number of Records 6, Generation counter 6 00:07:45.436 =====Discovery Log Entry 0====== 00:07:45.436 trtype: tcp 00:07:45.436 adrfam: ipv4 00:07:45.436 subtype: current discovery subsystem 00:07:45.436 treq: not required 00:07:45.436 portid: 0 00:07:45.436 trsvcid: 4420 00:07:45.436 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:45.436 traddr: 10.0.0.2 00:07:45.436 eflags: explicit discovery connections, duplicate discovery information 00:07:45.436 sectype: none 00:07:45.436 =====Discovery Log Entry 1====== 00:07:45.436 trtype: tcp 00:07:45.436 adrfam: ipv4 00:07:45.436 subtype: nvme subsystem 00:07:45.436 treq: not required 00:07:45.436 portid: 0 00:07:45.436 trsvcid: 4420 00:07:45.436 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:45.436 traddr: 10.0.0.2 00:07:45.436 eflags: none 00:07:45.436 sectype: none 00:07:45.436 =====Discovery Log Entry 2====== 00:07:45.436 trtype: tcp 00:07:45.436 adrfam: ipv4 00:07:45.436 subtype: nvme subsystem 00:07:45.436 treq: not required 00:07:45.436 portid: 0 00:07:45.436 trsvcid: 4420 00:07:45.436 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:45.436 traddr: 10.0.0.2 00:07:45.436 eflags: none 00:07:45.436 sectype: none 00:07:45.436 =====Discovery Log Entry 3====== 00:07:45.437 trtype: tcp 00:07:45.437 adrfam: ipv4 00:07:45.437 subtype: nvme subsystem 00:07:45.437 treq: not required 00:07:45.437 portid: 0 00:07:45.437 trsvcid: 4420 00:07:45.437 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:45.437 traddr: 10.0.0.2 00:07:45.437 eflags: none 00:07:45.437 sectype: none 00:07:45.437 =====Discovery Log Entry 4====== 00:07:45.437 trtype: tcp 00:07:45.437 adrfam: ipv4 00:07:45.437 subtype: nvme subsystem 00:07:45.437 treq: not required 00:07:45.437 portid: 0 00:07:45.437 trsvcid: 4420 00:07:45.437 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:07:45.437 traddr: 10.0.0.2 00:07:45.437 eflags: none 00:07:45.437 sectype: none 00:07:45.437 =====Discovery Log Entry 5====== 00:07:45.437 trtype: tcp 00:07:45.437 adrfam: ipv4 00:07:45.437 subtype: discovery subsystem referral 00:07:45.437 treq: not required 00:07:45.437 portid: 0 00:07:45.437 trsvcid: 4430 00:07:45.437 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:45.437 traddr: 10.0.0.2 00:07:45.437 eflags: none 00:07:45.437 sectype: none 00:07:45.437 08:08:19 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:45.437 Perform nvmf subsystem discovery via RPC 00:07:45.437 08:08:19 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:45.437 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.437 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.437 [2024-02-13 08:08:19.119937] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:45.697 [ 00:07:45.697 { 00:07:45.697 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:45.697 "subtype": "Discovery", 00:07:45.697 "listen_addresses": [ 00:07:45.697 { 00:07:45.697 "transport": "TCP", 00:07:45.697 "trtype": "TCP", 00:07:45.697 "adrfam": "IPv4", 00:07:45.697 "traddr": "10.0.0.2", 00:07:45.697 "trsvcid": "4420" 00:07:45.697 } 00:07:45.697 ], 00:07:45.697 "allow_any_host": true, 00:07:45.697 "hosts": [] 00:07:45.697 }, 00:07:45.697 { 00:07:45.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.697 "subtype": "NVMe", 00:07:45.697 "listen_addresses": [ 00:07:45.697 { 00:07:45.697 "transport": "TCP", 00:07:45.697 "trtype": "TCP", 00:07:45.697 "adrfam": "IPv4", 00:07:45.697 "traddr": "10.0.0.2", 00:07:45.697 "trsvcid": "4420" 00:07:45.697 } 00:07:45.697 ], 00:07:45.697 "allow_any_host": true, 00:07:45.697 "hosts": [], 00:07:45.697 "serial_number": "SPDK00000000000001", 00:07:45.697 "model_number": 
"SPDK bdev Controller", 00:07:45.697 "max_namespaces": 32, 00:07:45.697 "min_cntlid": 1, 00:07:45.697 "max_cntlid": 65519, 00:07:45.697 "namespaces": [ 00:07:45.697 { 00:07:45.697 "nsid": 1, 00:07:45.697 "bdev_name": "Null1", 00:07:45.697 "name": "Null1", 00:07:45.697 "nguid": "40620759036D430C8B65A7D162E6BC63", 00:07:45.697 "uuid": "40620759-036d-430c-8b65-a7d162e6bc63" 00:07:45.697 } 00:07:45.697 ] 00:07:45.697 }, 00:07:45.697 { 00:07:45.697 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:45.697 "subtype": "NVMe", 00:07:45.697 "listen_addresses": [ 00:07:45.697 { 00:07:45.697 "transport": "TCP", 00:07:45.697 "trtype": "TCP", 00:07:45.697 "adrfam": "IPv4", 00:07:45.697 "traddr": "10.0.0.2", 00:07:45.697 "trsvcid": "4420" 00:07:45.697 } 00:07:45.697 ], 00:07:45.697 "allow_any_host": true, 00:07:45.697 "hosts": [], 00:07:45.697 "serial_number": "SPDK00000000000002", 00:07:45.697 "model_number": "SPDK bdev Controller", 00:07:45.697 "max_namespaces": 32, 00:07:45.697 "min_cntlid": 1, 00:07:45.697 "max_cntlid": 65519, 00:07:45.697 "namespaces": [ 00:07:45.697 { 00:07:45.697 "nsid": 1, 00:07:45.697 "bdev_name": "Null2", 00:07:45.697 "name": "Null2", 00:07:45.697 "nguid": "F4FF15FFE30D436AB381433C4DBAEDDD", 00:07:45.697 "uuid": "f4ff15ff-e30d-436a-b381-433c4dbaeddd" 00:07:45.697 } 00:07:45.697 ] 00:07:45.697 }, 00:07:45.697 { 00:07:45.697 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:45.697 "subtype": "NVMe", 00:07:45.697 "listen_addresses": [ 00:07:45.697 { 00:07:45.697 "transport": "TCP", 00:07:45.697 "trtype": "TCP", 00:07:45.697 "adrfam": "IPv4", 00:07:45.697 "traddr": "10.0.0.2", 00:07:45.697 "trsvcid": "4420" 00:07:45.697 } 00:07:45.697 ], 00:07:45.697 "allow_any_host": true, 00:07:45.697 "hosts": [], 00:07:45.697 "serial_number": "SPDK00000000000003", 00:07:45.697 "model_number": "SPDK bdev Controller", 00:07:45.697 "max_namespaces": 32, 00:07:45.697 "min_cntlid": 1, 00:07:45.697 "max_cntlid": 65519, 00:07:45.697 "namespaces": [ 00:07:45.697 { 00:07:45.697 "nsid": 1, 
00:07:45.697 "bdev_name": "Null3", 00:07:45.697 "name": "Null3", 00:07:45.697 "nguid": "09E820ACA08A452C8FB11094A287151D", 00:07:45.697 "uuid": "09e820ac-a08a-452c-8fb1-1094a287151d" 00:07:45.697 } 00:07:45.697 ] 00:07:45.697 }, 00:07:45.697 { 00:07:45.697 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:45.697 "subtype": "NVMe", 00:07:45.697 "listen_addresses": [ 00:07:45.697 { 00:07:45.697 "transport": "TCP", 00:07:45.697 "trtype": "TCP", 00:07:45.697 "adrfam": "IPv4", 00:07:45.697 "traddr": "10.0.0.2", 00:07:45.697 "trsvcid": "4420" 00:07:45.697 } 00:07:45.697 ], 00:07:45.697 "allow_any_host": true, 00:07:45.697 "hosts": [], 00:07:45.697 "serial_number": "SPDK00000000000004", 00:07:45.697 "model_number": "SPDK bdev Controller", 00:07:45.697 "max_namespaces": 32, 00:07:45.697 "min_cntlid": 1, 00:07:45.697 "max_cntlid": 65519, 00:07:45.697 "namespaces": [ 00:07:45.697 { 00:07:45.697 "nsid": 1, 00:07:45.697 "bdev_name": "Null4", 00:07:45.697 "name": "Null4", 00:07:45.697 "nguid": "8B9D2279CCBB4AF7B03C6B81286B5743", 00:07:45.697 "uuid": "8b9d2279-ccbb-4af7-b03c-6b81286b5743" 00:07:45.697 } 00:07:45.697 ] 00:07:45.697 } 00:07:45.697 ] 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@42 -- # seq 1 4 00:07:45.697 08:08:19 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.697 08:08:19 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.697 08:08:19 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.697 08:08:19 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:45.697 08:08:19 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 
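The teardown phase above runs the same pair of RPCs once per test subsystem: `nvmf_delete_subsystem` for `cnode$i`, then `bdev_null_delete` for `Null$i`, driven by `for i in $(seq 1 4)`. A minimal Python sketch of that command sequence (the RPC names are the real SPDK ones from the log; the helper function and its name are illustrative only):

```python
# Sketch of the teardown loop discovery.sh runs via rpc_cmd:
#   for i in $(seq 1 4): nvmf_delete_subsystem cnode$i; bdev_null_delete Null$i
# teardown_commands() is a hypothetical helper, not part of SPDK.

def teardown_commands(count=4):
    """Return the RPC invocations the loop issues, in order."""
    cmds = []
    for i in range(1, count + 1):
        cmds.append(f"nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode{i}")
        cmds.append(f"bdev_null_delete Null{i}")
    return cmds

for cmd in teardown_commands():
    print(cmd)
```

Each subsystem is deleted before its backing null bdev, which is why the later `bdev_get_bdevs` check in the log comes back empty.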
08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:45.697 08:08:19 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:45.697 08:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.697 08:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.697 08:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.697 08:08:19 -- target/discovery.sh@49 -- # check_bdevs= 00:07:45.697 08:08:19 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:45.697 08:08:19 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:45.697 08:08:19 -- target/discovery.sh@57 -- # nvmftestfini 00:07:45.697 08:08:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:45.697 08:08:19 -- nvmf/common.sh@116 -- # sync 00:07:45.697 08:08:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:45.697 08:08:19 -- nvmf/common.sh@119 -- # set +e 00:07:45.697 08:08:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:45.697 08:08:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:45.697 rmmod nvme_tcp 00:07:45.697 rmmod nvme_fabrics 00:07:45.697 rmmod nvme_keyring 00:07:45.697 08:08:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:45.697 08:08:19 -- nvmf/common.sh@123 -- # set -e 00:07:45.697 08:08:19 -- nvmf/common.sh@124 -- # return 0 00:07:45.697 08:08:19 -- nvmf/common.sh@477 -- # '[' -n 2113631 ']' 00:07:45.697 08:08:19 -- nvmf/common.sh@478 -- # killprocess 2113631 00:07:45.697 08:08:19 -- common/autotest_common.sh@924 -- # '[' -z 2113631 ']' 00:07:45.697 08:08:19 -- common/autotest_common.sh@928 -- # kill -0 2113631 00:07:45.697 
08:08:19 -- common/autotest_common.sh@929 -- # uname 00:07:45.697 08:08:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:45.697 08:08:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2113631 00:07:45.697 08:08:19 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:45.697 08:08:19 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:45.697 08:08:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2113631' 00:07:45.697 killing process with pid 2113631 00:07:45.697 08:08:19 -- common/autotest_common.sh@943 -- # kill 2113631 00:07:45.698 [2024-02-13 08:08:19.357183] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:45.698 08:08:19 -- common/autotest_common.sh@948 -- # wait 2113631 00:07:45.962 08:08:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:45.962 08:08:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:45.962 08:08:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:45.962 08:08:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.962 08:08:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:45.962 08:08:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.962 08:08:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.962 08:08:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.500 08:08:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:48.500 00:07:48.500 real 0m9.502s 00:07:48.500 user 0m7.207s 00:07:48.500 sys 0m4.602s 00:07:48.500 08:08:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.500 08:08:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.500 ************************************ 00:07:48.500 END TEST nvmf_discovery 00:07:48.500 ************************************ 00:07:48.500 08:08:21 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:48.500 08:08:21 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:48.500 08:08:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:48.500 08:08:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.500 ************************************ 00:07:48.500 START TEST nvmf_referrals 00:07:48.500 ************************************ 00:07:48.500 08:08:21 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:48.500 * Looking for test storage... 00:07:48.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.500 08:08:21 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.500 08:08:21 -- nvmf/common.sh@7 -- # uname -s 00:07:48.500 08:08:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.500 08:08:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.500 08:08:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.500 08:08:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.500 08:08:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.500 08:08:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.500 08:08:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.500 08:08:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.500 08:08:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.500 08:08:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.500 08:08:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:48.500 08:08:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:48.500 08:08:21 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.500 08:08:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.500 08:08:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.500 08:08:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.500 08:08:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.500 08:08:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.500 08:08:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.500 08:08:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.500 08:08:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.500 08:08:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.500 08:08:21 -- paths/export.sh@5 -- # export PATH 00:07:48.500 08:08:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.500 08:08:21 -- nvmf/common.sh@46 -- # : 0 00:07:48.500 08:08:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:48.500 08:08:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:48.500 08:08:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:48.500 08:08:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.500 08:08:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.500 08:08:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:48.500 08:08:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:48.500 08:08:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:48.500 08:08:21 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:48.500 08:08:21 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:48.500 08:08:21 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:48.500 08:08:21 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:48.500 08:08:21 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:48.500 08:08:21 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:48.500 08:08:21 -- target/referrals.sh@37 -- # nvmftestinit 00:07:48.500 08:08:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:48.500 08:08:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.500 08:08:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:48.500 08:08:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:48.500 08:08:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:48.500 08:08:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.500 08:08:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.500 08:08:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.500 08:08:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:48.500 08:08:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:48.500 08:08:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:48.500 08:08:21 -- common/autotest_common.sh@10 -- # set +x 00:07:53.773 08:08:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:53.773 08:08:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:53.773 08:08:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:53.773 08:08:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:53.773 08:08:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:53.773 08:08:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:53.773 08:08:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:53.773 08:08:27 -- nvmf/common.sh@294 -- # net_devs=() 00:07:53.773 08:08:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:53.773 08:08:27 -- nvmf/common.sh@295 -- # e810=() 00:07:53.773 08:08:27 -- nvmf/common.sh@295 -- # local 
-ga e810 00:07:53.773 08:08:27 -- nvmf/common.sh@296 -- # x722=() 00:07:53.773 08:08:27 -- nvmf/common.sh@296 -- # local -ga x722 00:07:53.773 08:08:27 -- nvmf/common.sh@297 -- # mlx=() 00:07:53.773 08:08:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:53.773 08:08:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.773 08:08:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.774 08:08:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.774 08:08:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.774 08:08:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:53.774 08:08:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:53.774 08:08:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:53.774 08:08:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.774 08:08:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:53.774 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:53.774 08:08:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:53.774 08:08:27 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.774 08:08:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:53.774 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:53.774 08:08:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:53.774 08:08:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.774 08:08:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.774 08:08:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.774 08:08:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.774 08:08:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:53.774 Found net devices under 0000:af:00.0: cvl_0_0 00:07:53.774 08:08:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.774 08:08:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.774 08:08:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.774 08:08:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.774 08:08:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.774 08:08:27 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:53.774 Found net devices under 0000:af:00.1: cvl_0_1 00:07:53.774 08:08:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.774 08:08:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:53.774 08:08:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:53.774 08:08:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:53.774 08:08:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:53.774 08:08:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.774 08:08:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.774 08:08:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.774 08:08:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:53.774 08:08:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.774 08:08:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.774 08:08:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:53.774 08:08:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.774 08:08:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.774 08:08:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:53.774 08:08:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:53.774 08:08:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.774 08:08:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.774 08:08:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.774 08:08:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.774 08:08:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:53.774 08:08:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.033 08:08:27 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:54.033 08:08:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.033 08:08:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:54.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:07:54.033 00:07:54.033 --- 10.0.0.2 ping statistics --- 00:07:54.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.033 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:07:54.033 08:08:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:07:54.033 00:07:54.033 --- 10.0.0.1 ping statistics --- 00:07:54.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.033 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:07:54.033 08:08:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.033 08:08:27 -- nvmf/common.sh@410 -- # return 0 00:07:54.033 08:08:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:54.033 08:08:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.033 08:08:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:54.033 08:08:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:54.033 08:08:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.033 08:08:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:54.033 08:08:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:54.033 08:08:27 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:54.033 08:08:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:54.033 08:08:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:54.033 08:08:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.033 08:08:27 -- nvmf/common.sh@469 -- # nvmfpid=2117686 00:07:54.033 08:08:27 
-- nvmf/common.sh@470 -- # waitforlisten 2117686 00:07:54.033 08:08:27 -- common/autotest_common.sh@817 -- # '[' -z 2117686 ']' 00:07:54.033 08:08:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.033 08:08:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:54.033 08:08:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.033 08:08:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:54.033 08:08:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.033 08:08:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.033 [2024-02-13 08:08:27.579681] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:54.033 [2024-02-13 08:08:27.579723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.033 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.033 [2024-02-13 08:08:27.642406] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.033 [2024-02-13 08:08:27.717802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.034 [2024-02-13 08:08:27.717907] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.034 [2024-02-13 08:08:27.717915] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.034 [2024-02-13 08:08:27.717922] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
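The `nvmf_tgt` startup banner above packs the SPDK version, git sha, and DPDK version into one line. A small parser for triaging logs like this one (the regex and variable names are ad hoc, not an SPDK interface):

```python
import re

# Banner line copied from the log above; the parser itself is a sketch.
banner = ("[2024-02-13 08:08:27.579681] Starting SPDK v24.05-pre "
          "git sha1 3bec6cb23 / DPDK 23.11.0 initialization...")

m = re.search(r"Starting SPDK (\S+) git sha1 (\w+) / DPDK (\S+)", banner)
spdk_version, git_sha, dpdk_version = m.groups()
print(spdk_version, git_sha, dpdk_version)
```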
00:07:54.034 [2024-02-13 08:08:27.717965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.034 [2024-02-13 08:08:27.717979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.034 [2024-02-13 08:08:27.718090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.034 [2024-02-13 08:08:27.718091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.970 08:08:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:54.970 08:08:28 -- common/autotest_common.sh@850 -- # return 0 00:07:54.970 08:08:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:54.970 08:08:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:54.970 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 08:08:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.971 08:08:28 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.971 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.971 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 [2024-02-13 08:08:28.422828] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.971 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:54.971 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.971 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 [2024-02-13 08:08:28.436186] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:54.971 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:54.971 08:08:28 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:07:54.971 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:54.971 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.971 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:54.971 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.971 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.971 08:08:28 -- target/referrals.sh@48 -- # jq length 00:07:54.971 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.971 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:54.971 08:08:28 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:54.971 08:08:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.971 08:08:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.971 08:08:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.971 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.971 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:54.971 08:08:28 -- target/referrals.sh@21 -- # sort 00:07:54.971 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:54.971 08:08:28 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:54.971 08:08:28 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:54.971 08:08:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.971 08:08:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.971 08:08:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.971 08:08:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.971 08:08:28 -- target/referrals.sh@26 -- # sort 00:07:55.230 08:08:28 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:55.230 08:08:28 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:55.230 08:08:28 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:55.230 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.230 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.230 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.230 08:08:28 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:55.230 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.230 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.230 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.230 08:08:28 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:55.230 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.230 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.230 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.230 08:08:28 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.230 08:08:28 -- target/referrals.sh@56 -- # jq length 00:07:55.230 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.230 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.230 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.230 08:08:28 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:55.230 08:08:28 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:55.230 08:08:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.230 08:08:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.230 08:08:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.230 08:08:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.230 08:08:28 -- target/referrals.sh@26 -- # sort 00:07:55.489 08:08:28 -- target/referrals.sh@26 -- # echo 00:07:55.489 08:08:28 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:55.489 08:08:28 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:55.489 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.489 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.489 08:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.489 08:08:28 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:55.489 08:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.489 08:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.489 08:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.489 08:08:29 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:55.489 08:08:29 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:55.489 08:08:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:55.489 08:08:29 -- target/referrals.sh@21 -- # sort 00:07:55.489 08:08:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:55.489 08:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.489 08:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.489 08:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.489 08:08:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:55.489 08:08:29 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.489 08:08:29 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:55.489 08:08:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:55.489 08:08:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:55.489 08:08:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.489 08:08:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:55.489 08:08:29 -- target/referrals.sh@26 -- # sort 00:07:55.748 08:08:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:55.748 08:08:29 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:55.748 08:08:29 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:55.748 08:08:29 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:55.748 08:08:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:55.748 08:08:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:07:55.748 08:08:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:55.748 08:08:29 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:55.748 08:08:29 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:55.748 08:08:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:55.748 08:08:29 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:55.748 08:08:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:55.748 08:08:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.008 08:08:29 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.008 08:08:29 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:56.008 08:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.008 08:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:56.008 08:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.008 08:08:29 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:56.008 08:08:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:56.008 08:08:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.008 08:08:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:56.008 08:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.008 08:08:29 -- target/referrals.sh@21 -- # sort 00:07:56.008 08:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:56.008 08:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:07:56.008 08:08:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:56.008 08:08:29 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.008 08:08:29 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:56.008 08:08:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.008 08:08:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.008 08:08:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.008 08:08:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.008 08:08:29 -- target/referrals.sh@26 -- # sort 00:07:56.008 08:08:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:56.008 08:08:29 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:56.008 08:08:29 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:56.008 08:08:29 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:56.008 08:08:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:56.008 08:08:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.008 08:08:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:56.268 08:08:29 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:56.268 08:08:29 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:56.268 08:08:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:56.268 08:08:29 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:56.268 08:08:29 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.268 08:08:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:56.268 08:08:29 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:56.268 08:08:29 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:56.268 08:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.268 08:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:56.268 08:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.268 08:08:29 -- target/referrals.sh@82 -- # jq length 00:07:56.268 08:08:29 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:56.268 08:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:56.268 08:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:56.268 08:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:56.268 08:08:29 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:56.268 08:08:29 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:56.268 08:08:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:56.268 08:08:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:56.528 08:08:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:56.528 08:08:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:56.528 08:08:29 -- target/referrals.sh@26 -- # sort 00:07:56.528 08:08:30 -- target/referrals.sh@26 -- # echo 00:07:56.528 08:08:30 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:56.528 08:08:30 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:56.528 08:08:30 -- target/referrals.sh@86 -- # nvmftestfini 00:07:56.528 08:08:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:56.528 08:08:30 -- nvmf/common.sh@116 -- # sync 00:07:56.528 08:08:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:56.528 08:08:30 -- nvmf/common.sh@119 -- # set +e 00:07:56.528 08:08:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:56.528 08:08:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:56.528 rmmod nvme_tcp 00:07:56.528 rmmod nvme_fabrics 00:07:56.528 rmmod nvme_keyring 00:07:56.528 08:08:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:56.528 08:08:30 -- nvmf/common.sh@123 -- # set -e 00:07:56.528 08:08:30 -- nvmf/common.sh@124 -- # return 0 00:07:56.528 08:08:30 -- nvmf/common.sh@477 -- # '[' -n 2117686 ']' 00:07:56.528 08:08:30 -- nvmf/common.sh@478 -- # killprocess 2117686 00:07:56.528 08:08:30 -- common/autotest_common.sh@924 -- # '[' -z 2117686 ']' 00:07:56.528 08:08:30 -- common/autotest_common.sh@928 -- # kill -0 2117686 00:07:56.528 08:08:30 -- common/autotest_common.sh@929 -- # uname 00:07:56.528 08:08:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:56.528 08:08:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2117686 00:07:56.528 08:08:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:56.528 08:08:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:56.528 08:08:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2117686' 00:07:56.528 killing process with pid 2117686 00:07:56.528 08:08:30 -- common/autotest_common.sh@943 -- # kill 2117686 00:07:56.528 08:08:30 -- common/autotest_common.sh@948 -- # wait 2117686 00:07:56.787 08:08:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:56.788 08:08:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:56.788 
08:08:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:56.788 08:08:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.788 08:08:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:56.788 08:08:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.788 08:08:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.788 08:08:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.324 08:08:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:59.324 00:07:59.324 real 0m10.747s 00:07:59.324 user 0m12.914s 00:07:59.324 sys 0m4.922s 00:07:59.324 08:08:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.324 08:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:59.324 ************************************ 00:07:59.324 END TEST nvmf_referrals 00:07:59.324 ************************************ 00:07:59.325 08:08:32 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:59.325 08:08:32 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:59.325 08:08:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:59.325 08:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:59.325 ************************************ 00:07:59.325 START TEST nvmf_connect_disconnect 00:07:59.325 ************************************ 00:07:59.325 08:08:32 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:59.325 * Looking for test storage... 
00:07:59.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.325 08:08:32 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.325 08:08:32 -- nvmf/common.sh@7 -- # uname -s 00:07:59.325 08:08:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.325 08:08:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.325 08:08:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.325 08:08:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.325 08:08:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.325 08:08:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.325 08:08:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.325 08:08:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.325 08:08:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.325 08:08:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.325 08:08:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:59.325 08:08:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:59.325 08:08:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.325 08:08:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.325 08:08:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.325 08:08:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.325 08:08:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.325 08:08:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.325 08:08:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.325 08:08:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.325 08:08:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.325 08:08:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.325 08:08:32 -- paths/export.sh@5 -- # export PATH 00:07:59.325 08:08:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.325 08:08:32 -- nvmf/common.sh@46 -- # : 0 00:07:59.325 08:08:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:59.325 08:08:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:59.325 08:08:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:59.325 08:08:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.325 08:08:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.325 08:08:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:59.325 08:08:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:59.325 08:08:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:59.325 08:08:32 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.325 08:08:32 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.325 08:08:32 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:59.325 08:08:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:59.325 08:08:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.325 08:08:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:59.325 08:08:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:59.325 08:08:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:59.325 08:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.325 08:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.325 08:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:59.325 08:08:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:59.325 08:08:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:59.325 08:08:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:59.325 08:08:32 -- common/autotest_common.sh@10 -- # set +x 00:08:04.667 08:08:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.667 08:08:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:04.667 08:08:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:04.667 08:08:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:04.667 08:08:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:04.667 08:08:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:04.667 08:08:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:04.667 08:08:38 -- nvmf/common.sh@294 -- # net_devs=() 00:08:04.667 08:08:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:04.667 08:08:38 -- nvmf/common.sh@295 -- # e810=() 00:08:04.667 08:08:38 -- nvmf/common.sh@295 -- # local -ga e810 00:08:04.667 08:08:38 -- nvmf/common.sh@296 -- # x722=() 00:08:04.667 08:08:38 -- nvmf/common.sh@296 -- # local -ga x722 00:08:04.667 08:08:38 -- nvmf/common.sh@297 -- # mlx=() 00:08:04.667 08:08:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:04.667 08:08:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:04.667 08:08:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.667 08:08:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:04.667 08:08:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:04.667 08:08:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:04.667 08:08:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.667 08:08:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:04.667 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:04.667 08:08:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.667 08:08:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.667 08:08:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:04.668 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:04.668 08:08:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:04.668 08:08:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:04.668 
08:08:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.668 08:08:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.668 08:08:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.668 08:08:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.668 08:08:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:04.668 Found net devices under 0000:af:00.0: cvl_0_0 00:08:04.668 08:08:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.668 08:08:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.668 08:08:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.668 08:08:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.668 08:08:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.668 08:08:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:04.668 Found net devices under 0000:af:00.1: cvl_0_1 00:08:04.668 08:08:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.668 08:08:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:04.668 08:08:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:04.668 08:08:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:04.668 08:08:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:04.668 08:08:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.668 08:08:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.668 08:08:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.668 08:08:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:04.668 08:08:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.668 08:08:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.668 08:08:38 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:04.668 08:08:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.668 08:08:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.668 08:08:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:04.668 08:08:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:04.668 08:08:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.668 08:08:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.668 08:08:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.668 08:08:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.668 08:08:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:04.668 08:08:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.927 08:08:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.927 08:08:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.927 08:08:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:04.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:08:04.927 00:08:04.927 --- 10.0.0.2 ping statistics --- 00:08:04.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.927 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:04.927 08:08:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:08:04.927 00:08:04.927 --- 10.0.0.1 ping statistics --- 00:08:04.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.927 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:08:04.927 08:08:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.927 08:08:38 -- nvmf/common.sh@410 -- # return 0 00:08:04.927 08:08:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:04.927 08:08:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.927 08:08:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:04.927 08:08:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:04.927 08:08:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.927 08:08:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:04.927 08:08:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:04.927 08:08:38 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:04.927 08:08:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:04.927 08:08:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:04.927 08:08:38 -- common/autotest_common.sh@10 -- # set +x 00:08:04.927 08:08:38 -- nvmf/common.sh@469 -- # nvmfpid=2122044 00:08:04.927 08:08:38 -- nvmf/common.sh@470 -- # waitforlisten 2122044 00:08:04.927 08:08:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.927 08:08:38 -- common/autotest_common.sh@817 -- # '[' -z 2122044 ']' 00:08:04.927 08:08:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.927 08:08:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:04.927 08:08:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:04.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.927 08:08:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:04.927 08:08:38 -- common/autotest_common.sh@10 -- # set +x 00:08:04.927 [2024-02-13 08:08:38.492108] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:04.927 [2024-02-13 08:08:38.492152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.927 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.927 [2024-02-13 08:08:38.554792] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.187 [2024-02-13 08:08:38.633227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.187 [2024-02-13 08:08:38.633343] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.187 [2024-02-13 08:08:38.633350] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.187 [2024-02-13 08:08:38.633356] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:05.187 [2024-02-13 08:08:38.633388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.187 [2024-02-13 08:08:38.633496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.187 [2024-02-13 08:08:38.633583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.187 [2024-02-13 08:08:38.633584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.756 08:08:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:05.756 08:08:39 -- common/autotest_common.sh@850 -- # return 0 00:08:05.756 08:08:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:05.756 08:08:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:05.756 08:08:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.756 08:08:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:05.756 08:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.756 08:08:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.756 [2024-02-13 08:08:39.334922] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.756 08:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:05.756 08:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.756 08:08:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.756 08:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:05.756 08:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.756 08:08:39 -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.756 08:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.756 08:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.756 08:08:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.756 08:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.756 08:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.756 08:08:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.756 [2024-02-13 08:08:39.386254] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.756 08:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:05.756 08:08:39 -- target/connect_disconnect.sh@34 -- # set +x 00:08:08.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:31.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.152 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.015 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.886 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.318 08:12:31 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:58.318 08:12:31 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:58.318 08:12:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:58.318 08:12:31 -- nvmf/common.sh@116 -- # sync 00:11:58.318 08:12:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:58.318 08:12:31 -- nvmf/common.sh@119 -- # set +e 00:11:58.318 08:12:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:58.318 08:12:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:58.318 rmmod nvme_tcp 00:11:58.318 rmmod nvme_fabrics 00:11:58.318 rmmod nvme_keyring 00:11:58.318 08:12:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:58.318 08:12:31 -- nvmf/common.sh@123 -- # set -e 00:11:58.318 08:12:31 -- nvmf/common.sh@124 -- # return 0 00:11:58.318 08:12:31 -- nvmf/common.sh@477 -- # '[' -n 2122044 ']' 00:11:58.318 08:12:31 -- nvmf/common.sh@478 -- # killprocess 2122044 00:11:58.318 08:12:31 -- common/autotest_common.sh@924 -- # '[' -z 2122044 ']' 00:11:58.318 08:12:31 -- common/autotest_common.sh@928 -- # kill -0 2122044 00:11:58.318 08:12:31 -- common/autotest_common.sh@929 -- # uname 00:11:58.318 08:12:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:58.318 08:12:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 
2122044 00:11:58.318 08:12:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:58.318 08:12:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:58.318 08:12:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2122044' 00:11:58.318 killing process with pid 2122044 00:11:58.318 08:12:31 -- common/autotest_common.sh@943 -- # kill 2122044 00:11:58.318 08:12:31 -- common/autotest_common.sh@948 -- # wait 2122044 00:11:58.577 08:12:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:58.577 08:12:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:58.577 08:12:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:58.577 08:12:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.577 08:12:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:58.577 08:12:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.577 08:12:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.577 08:12:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.113 08:12:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:01.113 00:12:01.113 real 4m1.788s 00:12:01.113 user 15m27.340s 00:12:01.113 sys 0m20.544s 00:12:01.113 08:12:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:01.113 08:12:34 -- common/autotest_common.sh@10 -- # set +x 00:12:01.113 ************************************ 00:12:01.113 END TEST nvmf_connect_disconnect 00:12:01.113 ************************************ 00:12:01.113 08:12:34 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:01.113 08:12:34 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:01.113 08:12:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:01.113 08:12:34 -- common/autotest_common.sh@10 -- # set +x 00:12:01.113 ************************************ 00:12:01.113 
START TEST nvmf_multitarget 00:12:01.113 ************************************ 00:12:01.113 08:12:34 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:01.113 * Looking for test storage... 00:12:01.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.113 08:12:34 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.113 08:12:34 -- nvmf/common.sh@7 -- # uname -s 00:12:01.113 08:12:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.113 08:12:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.113 08:12:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.113 08:12:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.113 08:12:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.113 08:12:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.113 08:12:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.113 08:12:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.113 08:12:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.113 08:12:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.113 08:12:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:01.113 08:12:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:01.113 08:12:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.113 08:12:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.113 08:12:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.113 08:12:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.113 08:12:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.113 08:12:34 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.113 08:12:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.114 08:12:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.114 08:12:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.114 08:12:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.114 08:12:34 -- paths/export.sh@5 -- # export PATH 00:12:01.114 08:12:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.114 08:12:34 -- nvmf/common.sh@46 -- # : 0 00:12:01.114 08:12:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:01.114 08:12:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:01.114 08:12:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:01.114 08:12:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.114 08:12:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.114 08:12:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:01.114 08:12:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:01.114 08:12:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:01.114 08:12:34 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:01.114 08:12:34 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:01.114 08:12:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:01.114 08:12:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.114 08:12:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:01.114 08:12:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:01.114 08:12:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:01.114 08:12:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.114 08:12:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.114 08:12:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.114 08:12:34 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:01.114 08:12:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:01.114 08:12:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:01.114 08:12:34 -- common/autotest_common.sh@10 -- # set +x 00:12:07.675 08:12:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:07.675 08:12:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:07.675 08:12:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:07.675 08:12:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:07.675 08:12:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:07.675 08:12:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:07.675 08:12:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:07.675 08:12:40 -- nvmf/common.sh@294 -- # net_devs=() 00:12:07.675 08:12:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:07.675 08:12:40 -- nvmf/common.sh@295 -- # e810=() 00:12:07.675 08:12:40 -- nvmf/common.sh@295 -- # local -ga e810 00:12:07.675 08:12:40 -- nvmf/common.sh@296 -- # x722=() 00:12:07.675 08:12:40 -- nvmf/common.sh@296 -- # local -ga x722 00:12:07.675 08:12:40 -- nvmf/common.sh@297 -- # mlx=() 00:12:07.675 08:12:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:07.675 08:12:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.675 08:12:40 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.675 08:12:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:07.675 08:12:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:07.675 08:12:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:07.675 08:12:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:07.675 08:12:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:07.675 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:07.675 08:12:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:07.675 08:12:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:07.675 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:07.675 08:12:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:07.675 08:12:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:07.675 08:12:40 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:07.675 08:12:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:07.675 08:12:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.675 08:12:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:07.675 08:12:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.675 08:12:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:07.675 Found net devices under 0000:af:00.0: cvl_0_0 00:12:07.675 08:12:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.675 08:12:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:07.675 08:12:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.675 08:12:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:07.675 08:12:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.676 08:12:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:07.676 Found net devices under 0000:af:00.1: cvl_0_1 00:12:07.676 08:12:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.676 08:12:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:07.676 08:12:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:07.676 08:12:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:07.676 08:12:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:07.676 08:12:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:07.676 08:12:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.676 08:12:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.676 08:12:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.676 08:12:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:07.676 08:12:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.676 08:12:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.676 08:12:40 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:07.676 08:12:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.676 08:12:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.676 08:12:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:07.676 08:12:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:07.676 08:12:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.676 08:12:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.676 08:12:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.676 08:12:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.676 08:12:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:07.676 08:12:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.676 08:12:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.676 08:12:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.676 08:12:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:07.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:12:07.676 00:12:07.676 --- 10.0.0.2 ping statistics --- 00:12:07.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.676 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:12:07.676 08:12:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:07.676 00:12:07.676 --- 10.0.0.1 ping statistics --- 00:12:07.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.676 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:07.676 08:12:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.676 08:12:40 -- nvmf/common.sh@410 -- # return 0 00:12:07.676 08:12:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:07.676 08:12:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.676 08:12:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:07.676 08:12:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:07.676 08:12:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.676 08:12:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:07.676 08:12:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:07.676 08:12:40 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:07.676 08:12:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:07.676 08:12:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:07.676 08:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:07.676 08:12:40 -- nvmf/common.sh@469 -- # nvmfpid=2167186 00:12:07.676 08:12:40 -- nvmf/common.sh@470 -- # waitforlisten 2167186 00:12:07.676 08:12:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.676 08:12:40 -- common/autotest_common.sh@817 -- # '[' -z 2167186 ']' 00:12:07.676 08:12:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.676 08:12:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:07.676 08:12:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:07.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.676 08:12:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:07.676 08:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:07.676 [2024-02-13 08:12:40.701922] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:12:07.676 [2024-02-13 08:12:40.701964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.676 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.676 [2024-02-13 08:12:40.764640] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.676 [2024-02-13 08:12:40.841278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:07.676 [2024-02-13 08:12:40.841401] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.676 [2024-02-13 08:12:40.841409] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.676 [2024-02-13 08:12:40.841415] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:07.676 [2024-02-13 08:12:40.841455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.676 [2024-02-13 08:12:40.841557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.676 [2024-02-13 08:12:40.841644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.676 [2024-02-13 08:12:40.841645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.936 08:12:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:07.936 08:12:41 -- common/autotest_common.sh@850 -- # return 0 00:12:07.936 08:12:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:07.936 08:12:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:07.936 08:12:41 -- common/autotest_common.sh@10 -- # set +x 00:12:07.936 08:12:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.936 08:12:41 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:07.936 08:12:41 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:07.936 08:12:41 -- target/multitarget.sh@21 -- # jq length 00:12:08.195 08:12:41 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:08.195 08:12:41 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:08.195 "nvmf_tgt_1" 00:12:08.195 08:12:41 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:08.195 "nvmf_tgt_2" 00:12:08.195 08:12:41 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:08.195 08:12:41 -- target/multitarget.sh@28 -- # jq length 00:12:08.455 
08:12:41 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:08.455 08:12:41 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:08.455 true 00:12:08.455 08:12:42 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:08.455 true 00:12:08.715 08:12:42 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:08.715 08:12:42 -- target/multitarget.sh@35 -- # jq length 00:12:08.715 08:12:42 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:08.715 08:12:42 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:08.715 08:12:42 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:08.715 08:12:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:08.715 08:12:42 -- nvmf/common.sh@116 -- # sync 00:12:08.715 08:12:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:08.715 08:12:42 -- nvmf/common.sh@119 -- # set +e 00:12:08.715 08:12:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:08.715 08:12:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:08.715 rmmod nvme_tcp 00:12:08.715 rmmod nvme_fabrics 00:12:08.715 rmmod nvme_keyring 00:12:08.715 08:12:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:08.715 08:12:42 -- nvmf/common.sh@123 -- # set -e 00:12:08.715 08:12:42 -- nvmf/common.sh@124 -- # return 0 00:12:08.715 08:12:42 -- nvmf/common.sh@477 -- # '[' -n 2167186 ']' 00:12:08.715 08:12:42 -- nvmf/common.sh@478 -- # killprocess 2167186 00:12:08.715 08:12:42 -- common/autotest_common.sh@924 -- # '[' -z 2167186 ']' 00:12:08.715 08:12:42 -- common/autotest_common.sh@928 -- # kill -0 2167186 00:12:08.715 08:12:42 -- common/autotest_common.sh@929 -- # uname 00:12:08.715 08:12:42 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 
00:12:08.715 08:12:42 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2167186 00:12:08.715 08:12:42 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:08.715 08:12:42 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:08.715 08:12:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2167186' 00:12:08.715 killing process with pid 2167186 00:12:08.715 08:12:42 -- common/autotest_common.sh@943 -- # kill 2167186 00:12:08.715 08:12:42 -- common/autotest_common.sh@948 -- # wait 2167186 00:12:08.989 08:12:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:08.989 08:12:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:08.989 08:12:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:08.989 08:12:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:08.989 08:12:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:08.989 08:12:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.989 08:12:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.989 08:12:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.957 08:12:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:10.957 00:12:10.957 real 0m10.341s 00:12:10.957 user 0m9.287s 00:12:10.957 sys 0m5.125s 00:12:10.957 08:12:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:10.957 08:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:10.957 ************************************ 00:12:10.957 END TEST nvmf_multitarget 00:12:10.957 ************************************ 00:12:11.216 08:12:44 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:11.216 08:12:44 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:11.216 08:12:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:11.216 08:12:44 -- common/autotest_common.sh@10 -- # set +x 
00:12:11.216 ************************************ 00:12:11.216 START TEST nvmf_rpc 00:12:11.216 ************************************ 00:12:11.216 08:12:44 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:11.216 * Looking for test storage... 00:12:11.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.216 08:12:44 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.216 08:12:44 -- nvmf/common.sh@7 -- # uname -s 00:12:11.216 08:12:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.216 08:12:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.216 08:12:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.216 08:12:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.216 08:12:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.216 08:12:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.216 08:12:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.216 08:12:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.216 08:12:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.216 08:12:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.216 08:12:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:11.216 08:12:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:11.216 08:12:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.216 08:12:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.216 08:12:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.216 08:12:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.216 08:12:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:12:11.216 08:12:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.216 08:12:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.216 08:12:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.216 08:12:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.217 08:12:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.217 08:12:44 -- paths/export.sh@5 -- # export PATH 00:12:11.217 08:12:44 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.217 08:12:44 -- nvmf/common.sh@46 -- # : 0 00:12:11.217 08:12:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:11.217 08:12:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:11.217 08:12:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:11.217 08:12:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.217 08:12:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.217 08:12:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:11.217 08:12:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:11.217 08:12:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:11.217 08:12:44 -- target/rpc.sh@11 -- # loops=5 00:12:11.217 08:12:44 -- target/rpc.sh@23 -- # nvmftestinit 00:12:11.217 08:12:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:11.217 08:12:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.217 08:12:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:11.217 08:12:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:11.217 08:12:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:11.217 08:12:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.217 08:12:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.217 08:12:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.217 08:12:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:11.217 08:12:44 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:11.217 08:12:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:11.217 08:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:17.789 08:12:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:17.789 08:12:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:17.789 08:12:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:17.789 08:12:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:17.789 08:12:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:17.789 08:12:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:17.789 08:12:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:17.789 08:12:50 -- nvmf/common.sh@294 -- # net_devs=() 00:12:17.789 08:12:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:17.789 08:12:50 -- nvmf/common.sh@295 -- # e810=() 00:12:17.789 08:12:50 -- nvmf/common.sh@295 -- # local -ga e810 00:12:17.789 08:12:50 -- nvmf/common.sh@296 -- # x722=() 00:12:17.789 08:12:50 -- nvmf/common.sh@296 -- # local -ga x722 00:12:17.789 08:12:50 -- nvmf/common.sh@297 -- # mlx=() 00:12:17.789 08:12:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:17.789 08:12:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:12:17.789 08:12:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.789 08:12:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:17.789 08:12:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:17.789 08:12:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:17.789 08:12:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:17.790 08:12:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:17.790 08:12:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:17.790 08:12:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:17.790 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:17.790 08:12:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:17.790 08:12:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:17.790 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:17.790 08:12:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:17.790 08:12:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:17.790 08:12:50 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:17.790 08:12:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.790 08:12:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:17.790 08:12:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.790 08:12:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:17.790 Found net devices under 0000:af:00.0: cvl_0_0 00:12:17.790 08:12:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.790 08:12:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:17.790 08:12:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.790 08:12:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:17.790 08:12:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.790 08:12:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:17.790 Found net devices under 0000:af:00.1: cvl_0_1 00:12:17.790 08:12:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.790 08:12:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:17.790 08:12:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:17.790 08:12:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:17.790 08:12:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.790 08:12:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.790 08:12:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.790 08:12:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:17.790 08:12:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.790 08:12:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.790 08:12:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:17.790 08:12:50 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.790 08:12:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.790 08:12:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:17.790 08:12:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:17.790 08:12:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.790 08:12:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.790 08:12:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.790 08:12:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.790 08:12:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:17.790 08:12:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.790 08:12:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.790 08:12:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.790 08:12:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:17.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:12:17.790 00:12:17.790 --- 10.0.0.2 ping statistics --- 00:12:17.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.790 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:12:17.790 08:12:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:17.790 00:12:17.790 --- 10.0.0.1 ping statistics --- 00:12:17.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.790 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:17.790 08:12:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.790 08:12:50 -- nvmf/common.sh@410 -- # return 0 00:12:17.790 08:12:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:17.790 08:12:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.790 08:12:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:17.790 08:12:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.790 08:12:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:17.790 08:12:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:17.790 08:12:50 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:17.790 08:12:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:17.790 08:12:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:17.790 08:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:17.790 08:12:50 -- nvmf/common.sh@469 -- # nvmfpid=2171244 00:12:17.790 08:12:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.790 08:12:50 -- nvmf/common.sh@470 -- # waitforlisten 2171244 00:12:17.790 08:12:50 -- common/autotest_common.sh@817 -- # '[' -z 2171244 ']' 00:12:17.790 08:12:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.790 08:12:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:17.790 08:12:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:17.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.790 08:12:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:17.790 08:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:17.790 [2024-02-13 08:12:50.747500] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:12:17.790 [2024-02-13 08:12:50.747539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.790 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.790 [2024-02-13 08:12:50.810326] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.790 [2024-02-13 08:12:50.879108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:17.790 [2024-02-13 08:12:50.879239] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.790 [2024-02-13 08:12:50.879248] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.790 [2024-02-13 08:12:50.879254] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:17.790 [2024-02-13 08:12:50.879303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.790 [2024-02-13 08:12:50.879406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.790 [2024-02-13 08:12:50.879477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.790 [2024-02-13 08:12:50.879478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.050 08:12:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.050 08:12:51 -- common/autotest_common.sh@850 -- # return 0 00:12:18.050 08:12:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:18.050 08:12:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:18.050 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.050 08:12:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.050 08:12:51 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:18.050 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.050 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.050 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.050 08:12:51 -- target/rpc.sh@26 -- # stats='{ 00:12:18.050 "tick_rate": 2100000000, 00:12:18.050 "poll_groups": [ 00:12:18.050 { 00:12:18.050 "name": "nvmf_tgt_poll_group_0", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [] 00:12:18.050 }, 00:12:18.050 { 00:12:18.050 "name": "nvmf_tgt_poll_group_1", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [] 00:12:18.050 }, 00:12:18.050 { 00:12:18.050 "name": 
"nvmf_tgt_poll_group_2", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [] 00:12:18.050 }, 00:12:18.050 { 00:12:18.050 "name": "nvmf_tgt_poll_group_3", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [] 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 }' 00:12:18.050 08:12:51 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:18.050 08:12:51 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:18.050 08:12:51 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:18.050 08:12:51 -- target/rpc.sh@15 -- # wc -l 00:12:18.050 08:12:51 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:18.050 08:12:51 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:18.050 08:12:51 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:18.050 08:12:51 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.050 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.050 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.050 [2024-02-13 08:12:51.691212] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.050 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.050 08:12:51 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:18.050 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.050 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.050 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.050 08:12:51 -- target/rpc.sh@33 -- # stats='{ 00:12:18.050 "tick_rate": 2100000000, 00:12:18.050 "poll_groups": [ 00:12:18.050 { 00:12:18.050 "name": 
"nvmf_tgt_poll_group_0", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [ 00:12:18.050 { 00:12:18.050 "trtype": "TCP" 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 }, 00:12:18.050 { 00:12:18.050 "name": "nvmf_tgt_poll_group_1", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [ 00:12:18.050 { 00:12:18.050 "trtype": "TCP" 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 }, 00:12:18.050 { 00:12:18.050 "name": "nvmf_tgt_poll_group_2", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [ 00:12:18.050 { 00:12:18.050 "trtype": "TCP" 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 }, 00:12:18.050 { 00:12:18.050 "name": "nvmf_tgt_poll_group_3", 00:12:18.050 "admin_qpairs": 0, 00:12:18.050 "io_qpairs": 0, 00:12:18.050 "current_admin_qpairs": 0, 00:12:18.050 "current_io_qpairs": 0, 00:12:18.050 "pending_bdev_io": 0, 00:12:18.050 "completed_nvme_io": 0, 00:12:18.050 "transports": [ 00:12:18.050 { 00:12:18.050 "trtype": "TCP" 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 } 00:12:18.050 ] 00:12:18.050 }' 00:12:18.050 08:12:51 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:18.050 08:12:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:18.050 08:12:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:18.050 08:12:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:18.310 08:12:51 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:18.310 08:12:51 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:12:18.310 08:12:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:18.310 08:12:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:18.310 08:12:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:18.310 08:12:51 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:18.310 08:12:51 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:18.310 08:12:51 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:18.311 08:12:51 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:18.311 08:12:51 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:18.311 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.311 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.311 Malloc1 00:12:18.311 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.311 08:12:51 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.311 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.311 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.311 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.311 08:12:51 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.311 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.311 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.311 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.311 08:12:51 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:18.311 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.311 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.311 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.311 08:12:51 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:18.311 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.311 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.311 [2024-02-13 08:12:51.846977] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.311 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.311 08:12:51 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:18.311 08:12:51 -- common/autotest_common.sh@638 -- # local es=0 00:12:18.311 08:12:51 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:18.311 08:12:51 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:18.311 08:12:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:18.311 08:12:51 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:18.311 08:12:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:18.311 08:12:51 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:18.311 08:12:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:18.311 08:12:51 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:18.311 08:12:51 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.311 08:12:51 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:18.311 [2024-02-13 08:12:51.875538] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:18.311 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.311 could not add new controller: failed to write to nvme-fabrics device 00:12:18.311 08:12:51 -- common/autotest_common.sh@641 -- # es=1 00:12:18.311 08:12:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:18.311 08:12:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:18.311 08:12:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:18.311 08:12:51 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:18.311 08:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.311 08:12:51 -- common/autotest_common.sh@10 -- # set +x 00:12:18.311 08:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.311 08:12:51 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.690 08:12:53 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.690 08:12:53 -- common/autotest_common.sh@1175 -- # local i=0 00:12:19.690 08:12:53 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.690 08:12:53 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:19.690 08:12:53 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:21.593 08:12:55 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:21.593 08:12:55 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:21.593 08:12:55 -- common/autotest_common.sh@1184 -- 
# grep -c SPDKISFASTANDAWESOME 00:12:21.593 08:12:55 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:21.593 08:12:55 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.593 08:12:55 -- common/autotest_common.sh@1185 -- # return 0 00:12:21.593 08:12:55 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.852 08:12:55 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.852 08:12:55 -- common/autotest_common.sh@1196 -- # local i=0 00:12:21.852 08:12:55 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:21.852 08:12:55 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.852 08:12:55 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:21.852 08:12:55 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.852 08:12:55 -- common/autotest_common.sh@1208 -- # return 0 00:12:21.852 08:12:55 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:21.852 08:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.852 08:12:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.852 08:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.852 08:12:55 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.852 08:12:55 -- common/autotest_common.sh@638 -- # local es=0 00:12:21.852 08:12:55 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:21.852 08:12:55 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:21.852 08:12:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:21.852 08:12:55 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:21.852 08:12:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:21.852 08:12:55 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:21.852 08:12:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:21.852 08:12:55 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:21.852 08:12:55 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:21.852 08:12:55 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.852 [2024-02-13 08:12:55.389932] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:21.852 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:21.852 could not add new controller: failed to write to nvme-fabrics device 00:12:21.852 08:12:55 -- common/autotest_common.sh@641 -- # es=1 00:12:21.852 08:12:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:21.852 08:12:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:21.852 08:12:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:21.852 08:12:55 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:21.852 08:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:21.852 08:12:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.852 08:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:21.852 08:12:55 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.228 08:12:56 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.228 08:12:56 -- common/autotest_common.sh@1175 -- # local i=0 00:12:23.228 08:12:56 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.228 08:12:56 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:23.228 08:12:56 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:25.134 08:12:58 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:25.134 08:12:58 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:25.134 08:12:58 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.134 08:12:58 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:25.134 08:12:58 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.134 08:12:58 -- common/autotest_common.sh@1185 -- # return 0 00:12:25.134 08:12:58 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.134 08:12:58 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.134 08:12:58 -- common/autotest_common.sh@1196 -- # local i=0 00:12:25.134 08:12:58 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:25.134 08:12:58 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.134 08:12:58 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:25.134 08:12:58 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.134 08:12:58 -- common/autotest_common.sh@1208 -- # return 0 00:12:25.134 08:12:58 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.134 08:12:58 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:12:25.134 08:12:58 -- common/autotest_common.sh@10 -- # set +x 00:12:25.134 08:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.134 08:12:58 -- target/rpc.sh@81 -- # seq 1 5 00:12:25.134 08:12:58 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:25.134 08:12:58 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.134 08:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.134 08:12:58 -- common/autotest_common.sh@10 -- # set +x 00:12:25.134 08:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.135 08:12:58 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.135 08:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.135 08:12:58 -- common/autotest_common.sh@10 -- # set +x 00:12:25.135 [2024-02-13 08:12:58.717076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.135 08:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.135 08:12:58 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:25.135 08:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.135 08:12:58 -- common/autotest_common.sh@10 -- # set +x 00:12:25.135 08:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.135 08:12:58 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.135 08:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.135 08:12:58 -- common/autotest_common.sh@10 -- # set +x 00:12:25.135 08:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.135 08:12:58 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:12:26.512 08:12:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.512 08:12:59 -- common/autotest_common.sh@1175 -- # local i=0 00:12:26.512 08:12:59 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.512 08:12:59 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:26.512 08:12:59 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:28.417 08:13:01 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:28.417 08:13:01 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:28.417 08:13:01 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.417 08:13:01 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:28.417 08:13:01 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.417 08:13:01 -- common/autotest_common.sh@1185 -- # return 0 00:12:28.417 08:13:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.417 08:13:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.417 08:13:02 -- common/autotest_common.sh@1196 -- # local i=0 00:12:28.417 08:13:02 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:28.417 08:13:02 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.417 08:13:02 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:28.417 08:13:02 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.417 08:13:02 -- common/autotest_common.sh@1208 -- # return 0 00:12:28.417 08:13:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.417 08:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.417 08:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.417 08:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
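The connect/disconnect cycles above all gate on a `waitforserial` helper. A minimal sketch of that polling loop, reconstructed from the xtrace (serial string and retry limit mirror the log; the `lsblk` stub and its device name are assumptions so the snippet runs outside the CI host):

```shell
# Minimal sketch of common/autotest_common.sh's waitforserial polling loop:
# retry up to 16 times, ~2 s apart, until lsblk reports a block device
# carrying the expected serial.
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while ((i++ <= 15)); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
        sleep 2
    done
    return 1
}

# Stub lsblk so the sketch is runnable anywhere (assumption: the real helper
# queries the actual lsblk on the test host).
lsblk() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial SPDKISFASTANDAWESOME && echo "serial found"
```

The `grep -c` count is compared against an expected device counter rather than a bare match so the same helper can wait for multiple namespaces.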
00:12:28.417 08:13:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.417 08:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.417 08:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.417 08:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.417 08:13:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.417 08:13:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.417 08:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.417 08:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.417 08:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.417 08:13:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.417 08:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.417 08:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.417 [2024-02-13 08:13:02.071800] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.417 08:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.417 08:13:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:28.417 08:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.417 08:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.417 08:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.417 08:13:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:28.417 08:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.417 08:13:02 -- common/autotest_common.sh@10 -- # set +x 00:12:28.417 08:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.417 08:13:02 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.794 08:13:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.794 08:13:03 -- common/autotest_common.sh@1175 -- # local i=0 00:12:29.794 08:13:03 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.794 08:13:03 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:29.794 08:13:03 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:31.717 08:13:05 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:31.717 08:13:05 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:31.717 08:13:05 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.717 08:13:05 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:31.717 08:13:05 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.717 08:13:05 -- common/autotest_common.sh@1185 -- # return 0 00:12:31.717 08:13:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.717 08:13:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.717 08:13:05 -- common/autotest_common.sh@1196 -- # local i=0 00:12:31.717 08:13:05 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:31.717 08:13:05 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.717 08:13:05 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:31.717 08:13:05 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.717 08:13:05 -- common/autotest_common.sh@1208 -- # return 0 00:12:31.717 08:13:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.717 08:13:05 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:12:31.717 08:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.717 08:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:31.717 08:13:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.717 08:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:31.717 08:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.717 08:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:31.717 08:13:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.717 08:13:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.717 08:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:31.717 08:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.717 08:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:31.717 08:13:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.717 08:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:31.717 08:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.717 [2024-02-13 08:13:05.380266] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.717 08:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:31.717 08:13:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.717 08:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:31.717 08:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.717 08:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:31.717 08:13:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.717 08:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:31.717 08:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.717 08:13:05 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:31.717 08:13:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.096 08:13:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.096 08:13:06 -- common/autotest_common.sh@1175 -- # local i=0 00:12:33.096 08:13:06 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.096 08:13:06 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:33.096 08:13:06 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:35.003 08:13:08 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:35.003 08:13:08 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:35.003 08:13:08 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.003 08:13:08 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:35.003 08:13:08 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.003 08:13:08 -- common/autotest_common.sh@1185 -- # return 0 00:12:35.003 08:13:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.003 08:13:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.003 08:13:08 -- common/autotest_common.sh@1196 -- # local i=0 00:12:35.003 08:13:08 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:35.003 08:13:08 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.003 08:13:08 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:35.003 08:13:08 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.003 08:13:08 -- common/autotest_common.sh@1208 -- # return 0 00:12:35.003 08:13:08 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.003 08:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.003 08:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.003 08:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.003 08:13:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.003 08:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.003 08:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.003 08:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.003 08:13:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.003 08:13:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.003 08:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.003 08:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.003 08:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.003 08:13:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.003 08:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.003 08:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.271 [2024-02-13 08:13:08.692717] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.271 08:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.271 08:13:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.271 08:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.271 08:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.271 08:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.271 08:13:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.271 08:13:08 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:12:35.271 08:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.271 08:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.271 08:13:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.208 08:13:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.208 08:13:09 -- common/autotest_common.sh@1175 -- # local i=0 00:12:36.208 08:13:09 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.208 08:13:09 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:36.208 08:13:09 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:38.745 08:13:11 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:38.745 08:13:11 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:38.745 08:13:11 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.745 08:13:11 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:38.745 08:13:11 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.745 08:13:11 -- common/autotest_common.sh@1185 -- # return 0 00:12:38.745 08:13:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.745 08:13:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.745 08:13:11 -- common/autotest_common.sh@1196 -- # local i=0 00:12:38.745 08:13:11 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:38.745 08:13:11 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.745 08:13:11 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:38.745 08:13:11 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.745 
08:13:11 -- common/autotest_common.sh@1208 -- # return 0 00:12:38.745 08:13:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.745 08:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.745 08:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 08:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.745 08:13:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.745 08:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.745 08:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 08:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.745 08:13:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.745 08:13:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.745 08:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.745 08:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 08:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.745 08:13:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.745 08:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.745 08:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 [2024-02-13 08:13:11.966474] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.745 08:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.745 08:13:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.745 08:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.745 08:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 08:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.745 08:13:11 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.745 08:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.745 08:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.745 08:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.745 08:13:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.684 08:13:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.684 08:13:13 -- common/autotest_common.sh@1175 -- # local i=0 00:12:39.684 08:13:13 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.684 08:13:13 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:12:39.684 08:13:13 -- common/autotest_common.sh@1182 -- # sleep 2 00:12:41.591 08:13:15 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:12:41.591 08:13:15 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:12:41.591 08:13:15 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.591 08:13:15 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:12:41.591 08:13:15 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.591 08:13:15 -- common/autotest_common.sh@1185 -- # return 0 00:12:41.591 08:13:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.850 08:13:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.850 08:13:15 -- common/autotest_common.sh@1196 -- # local i=0 00:12:41.850 08:13:15 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:12:41.850 08:13:15 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.850 08:13:15 -- common/autotest_common.sh@1204 -- # lsblk -l -o 
NAME,SERIAL 00:12:41.850 08:13:15 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.850 08:13:15 -- common/autotest_common.sh@1208 -- # return 0 00:12:41.850 08:13:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@99 -- # seq 1 5 00:12:41.850 08:13:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.850 08:13:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 [2024-02-13 08:13:15.373947] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.850 08:13:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 [2024-02-13 08:13:15.422061] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 
08:13:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.850 08:13:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 
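Each iteration of the loop in this section runs the same six RPCs. A condensed sketch of that sequence, with `rpc_cmd` stubbed to count invocations (the real helper dispatches to SPDK's rpc.py, so the stub is purely illustrative):

```shell
# Condensed sketch of the per-iteration RPC sequence in this log: create the
# subsystem, add a TCP listener and a namespace, open it to any host, then
# tear the namespace and subsystem back down.
calls=0
rpc_cmd() { calls=$((calls + 1)); }   # stub; the real rpc_cmd invokes scripts/rpc.py

loops=5
nqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
    rpc_cmd nvmf_subsystem_remove_ns "$nqn" 1
    rpc_cmd nvmf_delete_subsystem "$nqn"
done
echo "$calls"   # 6 RPCs x 5 iterations = 30
```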
[2024-02-13 08:13:15.470190] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:41.850 08:13:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.850 08:13:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:41.850 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:41.850 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:41.850 [2024-02-13 08:13:15.522364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:41.850 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:41.851 08:13:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:41.851 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:41.851 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:41.851 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:41.851 08:13:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:41.851 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:41.851 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:42.110 08:13:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 [2024-02-13 08:13:15.570527] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:12:42.110 08:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:42.110 08:13:15 -- common/autotest_common.sh@10 -- # set +x
00:12:42.110 08:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:42.110 08:13:15 -- target/rpc.sh@110 -- # stats='{
00:12:42.110 "tick_rate": 2100000000,
00:12:42.110 "poll_groups": [
00:12:42.110 {
00:12:42.110 "name": "nvmf_tgt_poll_group_0",
00:12:42.110 "admin_qpairs": 2,
00:12:42.110 "io_qpairs": 168,
00:12:42.110 "current_admin_qpairs": 0,
00:12:42.110 "current_io_qpairs": 0,
00:12:42.110 "pending_bdev_io": 0,
00:12:42.110 "completed_nvme_io": 268,
00:12:42.110 "transports": [
00:12:42.110 {
00:12:42.110 "trtype": "TCP"
00:12:42.110 }
00:12:42.110 ]
00:12:42.110 },
00:12:42.110 {
00:12:42.110 "name": "nvmf_tgt_poll_group_1",
00:12:42.110 "admin_qpairs": 2,
00:12:42.110 "io_qpairs": 168,
00:12:42.110 "current_admin_qpairs": 0,
00:12:42.110 "current_io_qpairs": 0,
00:12:42.110 "pending_bdev_io": 0,
00:12:42.110 "completed_nvme_io": 268,
00:12:42.110 "transports": [
00:12:42.110 {
00:12:42.110 "trtype": "TCP"
00:12:42.111 }
00:12:42.111 ]
00:12:42.111 },
00:12:42.111 {
00:12:42.111 "name": "nvmf_tgt_poll_group_2",
00:12:42.111 "admin_qpairs": 1,
00:12:42.111 "io_qpairs": 168,
00:12:42.111 "current_admin_qpairs": 0,
00:12:42.111 "current_io_qpairs": 0,
00:12:42.111 "pending_bdev_io": 0,
00:12:42.111 "completed_nvme_io": 218,
00:12:42.111 "transports": [
00:12:42.111 {
00:12:42.111 "trtype": "TCP"
00:12:42.111 }
00:12:42.111 ]
00:12:42.111 },
00:12:42.111 {
00:12:42.111 "name": "nvmf_tgt_poll_group_3",
00:12:42.111 "admin_qpairs": 2,
00:12:42.111 "io_qpairs": 168,
00:12:42.111 "current_admin_qpairs": 0,
00:12:42.111 "current_io_qpairs": 0,
00:12:42.111 "pending_bdev_io": 0,
00:12:42.111 "completed_nvme_io": 268,
00:12:42.111 "transports": [
00:12:42.111 {
00:12:42.111 "trtype": "TCP"
00:12:42.111 }
00:12:42.111 ]
00:12:42.111 }
00:12:42.111 ]
00:12:42.111 }'
00:12:42.111 08:13:15 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:12:42.111 08:13:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:42.111 08:13:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:42.111 08:13:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:42.111 08:13:15 -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:12:42.111 08:13:15 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:12:42.111 08:13:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:42.111 08:13:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:42.111 08:13:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:42.111 08:13:15 -- target/rpc.sh@113 -- # (( 672 > 0 ))
00:12:42.111 08:13:15 -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:12:42.111 08:13:15 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:12:42.111 08:13:15 -- target/rpc.sh@123 -- # nvmftestfini
00:12:42.111 08:13:15 -- nvmf/common.sh@476 -- # nvmfcleanup
00:12:42.111 08:13:15 -- nvmf/common.sh@116 -- # sync
00:12:42.111 08:13:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:12:42.111 08:13:15 -- nvmf/common.sh@119 -- # set +e
00:12:42.111 08:13:15 -- nvmf/common.sh@120 -- # for i in {1..20}
00:12:42.111 08:13:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:12:42.111 rmmod nvme_tcp
00:12:42.111 rmmod nvme_fabrics
00:12:42.111 rmmod nvme_keyring
00:12:42.111 08:13:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:12:42.111 08:13:15 -- nvmf/common.sh@123 -- # set -e
00:12:42.111 08:13:15 -- nvmf/common.sh@124 -- # return 0
00:12:42.111 08:13:15 -- nvmf/common.sh@477 -- # '[' -n 2171244 ']'
00:12:42.111 08:13:15 -- nvmf/common.sh@478 -- # killprocess 2171244
00:12:42.111 08:13:15 -- common/autotest_common.sh@924 -- # '[' -z 2171244 ']'
00:12:42.111 08:13:15 -- common/autotest_common.sh@928 -- # kill -0 2171244
00:12:42.111 08:13:15 -- common/autotest_common.sh@929 -- # uname
00:12:42.111 08:13:15 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:12:42.111 08:13:15 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2171244
00:12:42.370 08:13:15 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:12:42.370 08:13:15 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:12:42.370 08:13:15 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2171244'
00:12:42.370 killing process with pid 2171244
00:12:42.370 08:13:15 -- common/autotest_common.sh@943 -- # kill 2171244
00:12:42.370 08:13:15 -- common/autotest_common.sh@948 -- # wait 2171244
00:12:42.370 08:13:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:42.370 08:13:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:42.370 08:13:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:42.370 08:13:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:42.370 08:13:16 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:42.370 08:13:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:42.370 08:13:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:42.370 08:13:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:44.904 08:13:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:12:44.904
00:12:44.904 real 0m33.456s
00:12:44.904 user 1m41.882s
00:12:44.904 sys 0m6.199s
00:12:44.904 08:13:18 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:12:44.904 08:13:18 -- common/autotest_common.sh@10 -- # set +x
00:12:44.904 ************************************
00:12:44.904 END TEST nvmf_rpc
00:12:44.904 ************************************
00:12:44.904 08:13:18 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:12:44.904 08:13:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:12:44.904 08:13:18 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:12:44.904 08:13:18 -- common/autotest_common.sh@10 -- # set +x
00:12:44.904 ************************************
00:12:44.904 START TEST nvmf_invalid
00:12:44.904 ************************************
00:12:44.904 08:13:18 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:12:44.904 * Looking for test storage...
00:12:44.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:44.904 08:13:18 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:44.904 08:13:18 -- nvmf/common.sh@7 -- # uname -s
00:12:44.904 08:13:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:44.904 08:13:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:44.904 08:13:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:44.904 08:13:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:44.904 08:13:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:44.904 08:13:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:44.904 08:13:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:44.904 08:13:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:44.904 08:13:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:44.904 08:13:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:44.904 08:13:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:12:44.904 08:13:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:12:44.904 08:13:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:44.904 08:13:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:44.904 08:13:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:44.904 08:13:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:44.904 08:13:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:44.904 08:13:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:44.904 08:13:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:44.904 08:13:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.904 08:13:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.904 08:13:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.904 08:13:18 -- paths/export.sh@5 -- # export PATH
00:12:44.904 08:13:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.904 08:13:18 -- nvmf/common.sh@46 -- # : 0
00:12:44.905 08:13:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:12:44.905 08:13:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:12:44.905 08:13:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:12:44.905 08:13:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:44.905 08:13:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:44.905 08:13:18 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:12:44.905 08:13:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:12:44.905 08:13:18 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:12:44.905 08:13:18 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:12:44.905 08:13:18 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:44.905 08:13:18 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:12:44.905 08:13:18 -- target/invalid.sh@14 -- # target=foobar
00:12:44.905 08:13:18 -- target/invalid.sh@16 -- # RANDOM=0
00:12:44.905 08:13:18 -- target/invalid.sh@34 -- # nvmftestinit
00:12:44.905 08:13:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:12:44.905 08:13:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:44.905 08:13:18 -- nvmf/common.sh@436 -- # prepare_net_devs
00:12:44.905 08:13:18 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:12:44.905 08:13:18 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:12:44.905 08:13:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:44.905 08:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:44.905 08:13:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:44.905 08:13:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:12:44.905 08:13:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:12:44.905 08:13:18 -- nvmf/common.sh@284 -- # xtrace_disable
00:12:44.905 08:13:18 -- common/autotest_common.sh@10 -- # set +x
00:12:51.473 08:13:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:12:51.473 08:13:23 -- nvmf/common.sh@290 -- # pci_devs=()
00:12:51.474 08:13:23 -- nvmf/common.sh@290 -- # local -a pci_devs
00:12:51.474 08:13:23 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:12:51.474 08:13:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:12:51.474 08:13:23 -- nvmf/common.sh@292 -- # pci_drivers=()
00:12:51.474 08:13:23 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:12:51.474 08:13:23 -- nvmf/common.sh@294 -- # net_devs=()
00:12:51.474 08:13:23 -- nvmf/common.sh@294 -- # local -ga net_devs
00:12:51.474 08:13:23 -- nvmf/common.sh@295 -- # e810=()
00:12:51.474 08:13:23 -- nvmf/common.sh@295 -- # local -ga e810
00:12:51.474 08:13:23 -- nvmf/common.sh@296 -- # x722=()
00:12:51.474 08:13:23 -- nvmf/common.sh@296 -- # local -ga x722
00:12:51.474 08:13:23 -- nvmf/common.sh@297 -- # mlx=()
00:12:51.474 08:13:23 -- nvmf/common.sh@297 -- # local -ga mlx
00:12:51.474 08:13:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:51.474 08:13:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:12:51.474 08:13:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:12:51.474 08:13:23 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:12:51.474 08:13:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:12:51.474 08:13:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:12:51.474 Found 0000:af:00.0 (0x8086 - 0x159b)
00:12:51.474 08:13:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:12:51.474 08:13:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:12:51.474 Found 0000:af:00.1 (0x8086 - 0x159b)
00:12:51.474 08:13:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:12:51.474 08:13:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:12:51.474 08:13:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:51.474 08:13:23 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:12:51.474 08:13:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:51.474 08:13:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:12:51.474 Found net devices under 0000:af:00.0: cvl_0_0
00:12:51.474 08:13:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:12:51.474 08:13:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:12:51.474 08:13:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:51.474 08:13:23 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:12:51.474 08:13:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:51.474 08:13:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:12:51.474 Found net devices under 0000:af:00.1: cvl_0_1
00:12:51.474 08:13:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:12:51.474 08:13:23 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:12:51.474 08:13:23 -- nvmf/common.sh@402 -- # is_hw=yes
00:12:51.474 08:13:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:12:51.474 08:13:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:12:51.474 08:13:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:51.474 08:13:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:51.474 08:13:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:51.474 08:13:23 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:12:51.474 08:13:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:51.474 08:13:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:51.474 08:13:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:12:51.474 08:13:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:51.474 08:13:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:51.474 08:13:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:12:51.474 08:13:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:12:51.474 08:13:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:12:51.474 08:13:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:51.474 08:13:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:51.474 08:13:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:51.474 08:13:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:12:51.474 08:13:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:51.474 08:13:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:51.474 08:13:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:51.474 08:13:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:12:51.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:51.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms
00:12:51.474
00:12:51.474 --- 10.0.0.2 ping statistics ---
00:12:51.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:51.474 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
00:12:51.474 08:13:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:51.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:51.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms
00:12:51.474
00:12:51.474 --- 10.0.0.1 ping statistics ---
00:12:51.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:51.474 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms
00:12:51.474 08:13:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:51.474 08:13:24 -- nvmf/common.sh@410 -- # return 0
00:12:51.474 08:13:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:12:51.474 08:13:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:51.474 08:13:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:12:51.474 08:13:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:12:51.474 08:13:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:51.474 08:13:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:12:51.474 08:13:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:12:51.474 08:13:24 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:12:51.474 08:13:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:12:51.474 08:13:24 -- common/autotest_common.sh@710 -- # xtrace_disable
00:12:51.474 08:13:24 -- common/autotest_common.sh@10 -- # set +x
00:12:51.474 08:13:24 -- nvmf/common.sh@469 -- # nvmfpid=2179363
00:12:51.474 08:13:24 -- nvmf/common.sh@470 -- # waitforlisten 2179363
00:12:51.474 08:13:24 -- common/autotest_common.sh@817 -- # '[' -z 2179363 ']'
00:12:51.474 08:13:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:51.474 08:13:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:12:51.474 08:13:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:51.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:51.474 08:13:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:51.474 08:13:24 -- common/autotest_common.sh@10 -- # set +x
00:12:51.474 08:13:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:51.474 [2024-02-13 08:13:24.164119] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:12:51.474 [2024-02-13 08:13:24.164161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:51.474 EAL: No free 2048 kB hugepages reported on node 1
00:12:51.474 [2024-02-13 08:13:24.226172] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:51.474 [2024-02-13 08:13:24.302974] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:12:51.474 [2024-02-13 08:13:24.303082] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:51.474 [2024-02-13 08:13:24.303090] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:51.474 [2024-02-13 08:13:24.303096] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:51.474 [2024-02-13 08:13:24.303141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:51.474 [2024-02-13 08:13:24.303157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:12:51.474 [2024-02-13 08:13:24.303265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:12:51.474 [2024-02-13 08:13:24.303266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.474 08:13:24 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:51.474 08:13:24 -- common/autotest_common.sh@850 -- # return 0
00:12:51.474 08:13:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:12:51.474 08:13:24 -- common/autotest_common.sh@716 -- # xtrace_disable
00:12:51.474 08:13:24 -- common/autotest_common.sh@10 -- # set +x
00:12:51.474 08:13:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:51.475 08:13:24 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:12:51.475 08:13:24 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30443
00:12:51.475 [2024-02-13 08:13:25.144350] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:12:51.734 08:13:25 -- target/invalid.sh@40 -- # out='request:
00:12:51.734 {
00:12:51.734 "nqn": "nqn.2016-06.io.spdk:cnode30443",
00:12:51.734 "tgt_name": "foobar",
00:12:51.734 "method": "nvmf_create_subsystem",
00:12:51.734 "req_id": 1
00:12:51.734 }
00:12:51.734 Got JSON-RPC error response
00:12:51.734 response:
00:12:51.734 {
00:12:51.734 "code": -32603,
00:12:51.734 "message": "Unable to find target foobar"
00:12:51.734 }'
00:12:51.734 08:13:25 -- target/invalid.sh@41 -- # [[ request:
00:12:51.734 {
00:12:51.734 "nqn": "nqn.2016-06.io.spdk:cnode30443",
00:12:51.734 "tgt_name": "foobar",
00:12:51.734 "method": "nvmf_create_subsystem",
00:12:51.734 "req_id": 1
00:12:51.734 }
00:12:51.734 Got JSON-RPC error response
00:12:51.734 response:
00:12:51.734 {
00:12:51.734 "code": -32603,
00:12:51.734 "message": "Unable to find target foobar"
00:12:51.734 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:12:51.734 08:13:25 -- target/invalid.sh@45 -- # echo -e '\x1f'
00:12:51.734 08:13:25 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6656
00:12:51.734 [2024-02-13 08:13:25.324987] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6656: invalid serial number 'SPDKISFASTANDAWESOME'
00:12:51.734 08:13:25 -- target/invalid.sh@45 -- # out='request:
00:12:51.734 {
00:12:51.734 "nqn": "nqn.2016-06.io.spdk:cnode6656",
00:12:51.734 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:51.734 "method": "nvmf_create_subsystem",
00:12:51.734 "req_id": 1
00:12:51.734 }
00:12:51.734 Got JSON-RPC error response
00:12:51.734 response:
00:12:51.734 {
00:12:51.734 "code": -32602,
00:12:51.734 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:51.734 }'
00:12:51.734 08:13:25 -- target/invalid.sh@46 -- # [[ request:
00:12:51.734 {
00:12:51.734 "nqn": "nqn.2016-06.io.spdk:cnode6656",
00:12:51.734 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:12:51.734 "method": "nvmf_create_subsystem",
00:12:51.734 "req_id": 1
00:12:51.734 }
00:12:51.734 Got JSON-RPC error response
00:12:51.734 response:
00:12:51.734 {
00:12:51.734 "code": -32602,
00:12:51.734 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:12:51.734 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:51.734 08:13:25 -- target/invalid.sh@50 -- # echo -e '\x1f'
00:12:51.734 08:13:25 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28327
00:12:51.994 [2024-02-13 08:13:25.505560] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28327: invalid model number 'SPDK_Controller'
00:12:51.994 08:13:25 -- target/invalid.sh@50 -- # out='request:
00:12:51.994 {
00:12:51.994 "nqn": "nqn.2016-06.io.spdk:cnode28327",
00:12:51.994 "model_number": "SPDK_Controller\u001f",
00:12:51.994 "method": "nvmf_create_subsystem",
00:12:51.994 "req_id": 1
00:12:51.994 }
00:12:51.994 Got JSON-RPC error response
00:12:51.994 response:
00:12:51.994 {
00:12:51.994 "code": -32602,
00:12:51.994 "message": "Invalid MN SPDK_Controller\u001f"
00:12:51.994 }'
00:12:51.994 08:13:25 -- target/invalid.sh@51 -- # [[ request:
00:12:51.994 {
00:12:51.994 "nqn": "nqn.2016-06.io.spdk:cnode28327",
00:12:51.994 "model_number": "SPDK_Controller\u001f",
00:12:51.994 "method": "nvmf_create_subsystem",
00:12:51.994 "req_id": 1
00:12:51.994 }
00:12:51.994 Got JSON-RPC error response
00:12:51.994 response:
00:12:51.994 {
00:12:51.994 "code": -32602,
00:12:51.994 "message": "Invalid MN SPDK_Controller\u001f"
00:12:51.994 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:12:51.994 08:13:25 -- target/invalid.sh@54 -- # gen_random_s 21
00:12:51.994 08:13:25 -- target/invalid.sh@19 -- # local length=21 ll
00:12:51.994 08:13:25 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:12:51.994 08:13:25 -- target/invalid.sh@21 -- # local chars
00:12:51.994 08:13:25 -- target/invalid.sh@22 -- # local string
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll = 0 ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 105
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x69'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=i
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 106
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x6a'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=j
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 105
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x69'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=i
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 92
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x5c'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+='\'
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 39
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x27'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=\'
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 50
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x32'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=2
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 65
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x41'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=A
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 57
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x39'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=9
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 113
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x71'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=q
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 41
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x29'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=')'
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 84
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x54'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=T
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 92
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x5c'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+='\'
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 111
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x6f'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=o
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 85
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x55'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=U
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 57
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x39'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=9
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 67
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x43'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=C
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 36
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x24'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+='$'
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 77
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x4d'
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=M
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ ))
00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length ))
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 51
00:12:51.994 08:13:25 -- target/invalid.sh@25 -- #
echo -e '\x33' 00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+=3 00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # printf %x 125 00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:51.994 08:13:25 -- target/invalid.sh@25 -- # string+='}' 00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.994 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.995 08:13:25 -- target/invalid.sh@25 -- # printf %x 101 00:12:51.995 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:51.995 08:13:25 -- target/invalid.sh@25 -- # string+=e 00:12:51.995 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:51.995 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:51.995 08:13:25 -- target/invalid.sh@28 -- # [[ i == \- ]] 00:12:51.995 08:13:25 -- target/invalid.sh@31 -- # echo 'iji\'\''2A9q)T\oU9C$M3}e' 00:12:51.995 08:13:25 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'iji\'\''2A9q)T\oU9C$M3}e' nqn.2016-06.io.spdk:cnode30710 00:12:52.255 [2024-02-13 08:13:25.830666] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30710: invalid serial number 'iji\'2A9q)T\oU9C$M3}e' 00:12:52.255 08:13:25 -- target/invalid.sh@54 -- # out='request: 00:12:52.255 { 00:12:52.255 "nqn": "nqn.2016-06.io.spdk:cnode30710", 00:12:52.255 "serial_number": "iji\\'\''2A9q)T\\oU9C$M3}e", 00:12:52.255 "method": "nvmf_create_subsystem", 00:12:52.255 "req_id": 1 00:12:52.255 } 00:12:52.255 Got JSON-RPC error response 00:12:52.255 response: 00:12:52.255 { 00:12:52.255 "code": -32602, 00:12:52.255 "message": "Invalid SN iji\\'\''2A9q)T\\oU9C$M3}e" 00:12:52.255 }' 00:12:52.255 08:13:25 -- target/invalid.sh@55 -- # [[ request: 00:12:52.255 { 00:12:52.255 "nqn": "nqn.2016-06.io.spdk:cnode30710", 00:12:52.255 
"serial_number": "iji\\'2A9q)T\\oU9C$M3}e", 00:12:52.255 "method": "nvmf_create_subsystem", 00:12:52.255 "req_id": 1 00:12:52.255 } 00:12:52.255 Got JSON-RPC error response 00:12:52.255 response: 00:12:52.255 { 00:12:52.255 "code": -32602, 00:12:52.255 "message": "Invalid SN iji\\'2A9q)T\\oU9C$M3}e" 00:12:52.255 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:52.255 08:13:25 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:52.255 08:13:25 -- target/invalid.sh@19 -- # local length=41 ll 00:12:52.255 08:13:25 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:52.255 08:13:25 -- target/invalid.sh@21 -- # local chars 00:12:52.255 08:13:25 -- target/invalid.sh@22 -- # local string 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 126 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+='~' 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 125 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+='}' 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- 
target/invalid.sh@25 -- # printf %x 66 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=B 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 82 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=R 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 98 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=b 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 75 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=K 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 111 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=o 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 65 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=A 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 100 00:12:52.255 08:13:25 -- 
target/invalid.sh@25 -- # echo -e '\x64' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=d 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 74 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=J 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 106 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=j 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # printf %x 52 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:52.255 08:13:25 -- target/invalid.sh@25 -- # string+=4 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.255 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.514 08:13:25 -- target/invalid.sh@25 -- # printf %x 117 00:12:52.514 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:52.514 08:13:25 -- target/invalid.sh@25 -- # string+=u 00:12:52.514 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.514 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 59 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+=';' 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 113 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:52.515 08:13:25 
-- target/invalid.sh@25 -- # string+=q 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 104 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+=h 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 106 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+=j 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 89 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+=Y 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 34 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+='"' 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 46 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+=. 
00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 54 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+=6 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # printf %x 127 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:52.515 08:13:25 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 93 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=']' 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 127 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 117 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=u 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 113 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=q 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- 
# (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 56 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=8 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 69 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=E 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 120 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=x 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 45 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=- 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 35 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+='#' 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 116 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=t 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # 
(( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 91 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+='[' 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 114 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=r 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 95 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=_ 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 37 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=% 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 52 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=4 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 71 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=G 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 
-- # printf %x 36 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+='$' 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 124 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+='|' 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # printf %x 64 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:52.515 08:13:26 -- target/invalid.sh@25 -- # string+=@ 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:52.515 08:13:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:52.515 08:13:26 -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:12:52.515 08:13:26 -- target/invalid.sh@31 -- # echo '~}BRbKoAdJj4u;qhjY".6]uq8Ex-#t[r_%4G$|@' 00:12:52.515 08:13:26 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '~}BRbKoAdJj4u;qhjY".6]uq8Ex-#t[r_%4G$|@' nqn.2016-06.io.spdk:cnode9578 00:12:52.776 [2024-02-13 08:13:26.264094] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9578: invalid model number '~}BRbKoAdJj4u;qhjY".6]uq8Ex-#t[r_%4G$|@' 00:12:52.776 08:13:26 -- target/invalid.sh@58 -- # out='request: 00:12:52.776 { 00:12:52.776 "nqn": "nqn.2016-06.io.spdk:cnode9578", 00:12:52.776 "model_number": "~}BRbKoAdJj4u;qhjY\".6\u007f]\u007fuq8Ex-#t[r_%4G$|@", 00:12:52.776 "method": "nvmf_create_subsystem", 00:12:52.776 "req_id": 1 00:12:52.776 } 00:12:52.776 Got JSON-RPC error response 00:12:52.776 response: 00:12:52.776 { 00:12:52.776 "code": -32602, 00:12:52.776 "message": "Invalid MN 
~}BRbKoAdJj4u;qhjY\".6\u007f]\u007fuq8Ex-#t[r_%4G$|@" 00:12:52.776 }' 00:12:52.776 08:13:26 -- target/invalid.sh@59 -- # [[ request: 00:12:52.776 { 00:12:52.776 "nqn": "nqn.2016-06.io.spdk:cnode9578", 00:12:52.776 "model_number": "~}BRbKoAdJj4u;qhjY\".6\u007f]\u007fuq8Ex-#t[r_%4G$|@", 00:12:52.776 "method": "nvmf_create_subsystem", 00:12:52.776 "req_id": 1 00:12:52.776 } 00:12:52.776 Got JSON-RPC error response 00:12:52.776 response: 00:12:52.776 { 00:12:52.776 "code": -32602, 00:12:52.776 "message": "Invalid MN ~}BRbKoAdJj4u;qhjY\".6\u007f]\u007fuq8Ex-#t[r_%4G$|@" 00:12:52.776 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:52.777 08:13:26 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:52.777 [2024-02-13 08:13:26.440747] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.068 08:13:26 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:53.068 08:13:26 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:53.068 08:13:26 -- target/invalid.sh@67 -- # echo '' 00:12:53.068 08:13:26 -- target/invalid.sh@67 -- # head -n 1 00:12:53.068 08:13:26 -- target/invalid.sh@67 -- # IP= 00:12:53.068 08:13:26 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:53.429 [2024-02-13 08:13:26.803301] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:53.429 08:13:26 -- target/invalid.sh@69 -- # out='request: 00:12:53.429 { 00:12:53.429 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:53.429 "listen_address": { 00:12:53.429 "trtype": "tcp", 00:12:53.429 "traddr": "", 00:12:53.429 "trsvcid": "4421" 00:12:53.429 }, 00:12:53.429 "method": "nvmf_subsystem_remove_listener", 00:12:53.429 "req_id": 1 00:12:53.429 } 00:12:53.429 Got 
JSON-RPC error response 00:12:53.429 response: 00:12:53.429 { 00:12:53.429 "code": -32602, 00:12:53.429 "message": "Invalid parameters" 00:12:53.429 }' 00:12:53.429 08:13:26 -- target/invalid.sh@70 -- # [[ request: 00:12:53.429 { 00:12:53.429 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:53.429 "listen_address": { 00:12:53.429 "trtype": "tcp", 00:12:53.429 "traddr": "", 00:12:53.429 "trsvcid": "4421" 00:12:53.429 }, 00:12:53.429 "method": "nvmf_subsystem_remove_listener", 00:12:53.429 "req_id": 1 00:12:53.429 } 00:12:53.429 Got JSON-RPC error response 00:12:53.429 response: 00:12:53.429 { 00:12:53.429 "code": -32602, 00:12:53.429 "message": "Invalid parameters" 00:12:53.429 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:53.429 08:13:26 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15845 -i 0 00:12:53.429 [2024-02-13 08:13:26.983865] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15845: invalid cntlid range [0-65519] 00:12:53.429 08:13:27 -- target/invalid.sh@73 -- # out='request: 00:12:53.429 { 00:12:53.429 "nqn": "nqn.2016-06.io.spdk:cnode15845", 00:12:53.429 "min_cntlid": 0, 00:12:53.429 "method": "nvmf_create_subsystem", 00:12:53.429 "req_id": 1 00:12:53.429 } 00:12:53.429 Got JSON-RPC error response 00:12:53.429 response: 00:12:53.429 { 00:12:53.429 "code": -32602, 00:12:53.429 "message": "Invalid cntlid range [0-65519]" 00:12:53.429 }' 00:12:53.429 08:13:27 -- target/invalid.sh@74 -- # [[ request: 00:12:53.429 { 00:12:53.429 "nqn": "nqn.2016-06.io.spdk:cnode15845", 00:12:53.429 "min_cntlid": 0, 00:12:53.429 "method": "nvmf_create_subsystem", 00:12:53.429 "req_id": 1 00:12:53.429 } 00:12:53.429 Got JSON-RPC error response 00:12:53.429 response: 00:12:53.429 { 00:12:53.429 "code": -32602, 00:12:53.429 "message": "Invalid cntlid range [0-65519]" 00:12:53.429 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
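The cntlid failures traced above all come down to one range check: SPDK rejects any subsystem whose controller-ID window is not 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF), which is why [0-65519], [65520-65519], [1-0], [1-65520], and [6-5] each return code -32602. A minimal Python sketch of that check — the function name and return convention are illustrative, not SPDK's actual C implementation:

```python
# Hypothetical re-implementation of the cntlid range check exercised by
# the nvmf_create_subsystem tests above. NVMe reserves cntlid 0 and
# 0xFFF0-0xFFFF, so usable controller IDs are 1..65519 (0xFFEF).
CNTLID_MIN = 1
CNTLID_MAX = 0xFFEF  # 65519


def validate_cntlid_range(min_cntlid: int, max_cntlid: int):
    """Return None if the range is acceptable, else an error message
    shaped like the JSON-RPC responses in the log."""
    if not (CNTLID_MIN <= min_cntlid <= CNTLID_MAX
            and CNTLID_MIN <= max_cntlid <= CNTLID_MAX
            and min_cntlid <= max_cntlid):
        return f"Invalid cntlid range [{min_cntlid}-{max_cntlid}]"
    return None
```

Fed the same inputs as invalid.sh (`-i 0`, `-i 65520`, `-I 0`, `-I 65520`, `-i 6 -I 5`), the sketch reproduces the error strings the `[[ ... == *Invalid\ cntlid\ range* ]]` checks match against.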
00:12:53.429 08:13:27 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9883 -i 65520 00:12:53.688 [2024-02-13 08:13:27.180527] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9883: invalid cntlid range [65520-65519] 00:12:53.688 08:13:27 -- target/invalid.sh@75 -- # out='request: 00:12:53.688 { 00:12:53.688 "nqn": "nqn.2016-06.io.spdk:cnode9883", 00:12:53.688 "min_cntlid": 65520, 00:12:53.688 "method": "nvmf_create_subsystem", 00:12:53.688 "req_id": 1 00:12:53.688 } 00:12:53.688 Got JSON-RPC error response 00:12:53.688 response: 00:12:53.688 { 00:12:53.688 "code": -32602, 00:12:53.688 "message": "Invalid cntlid range [65520-65519]" 00:12:53.688 }' 00:12:53.688 08:13:27 -- target/invalid.sh@76 -- # [[ request: 00:12:53.688 { 00:12:53.688 "nqn": "nqn.2016-06.io.spdk:cnode9883", 00:12:53.688 "min_cntlid": 65520, 00:12:53.688 "method": "nvmf_create_subsystem", 00:12:53.688 "req_id": 1 00:12:53.688 } 00:12:53.688 Got JSON-RPC error response 00:12:53.688 response: 00:12:53.688 { 00:12:53.688 "code": -32602, 00:12:53.688 "message": "Invalid cntlid range [65520-65519]" 00:12:53.688 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:53.688 08:13:27 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14421 -I 0 00:12:53.688 [2024-02-13 08:13:27.373255] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14421: invalid cntlid range [1-0] 00:12:53.947 08:13:27 -- target/invalid.sh@77 -- # out='request: 00:12:53.947 { 00:12:53.947 "nqn": "nqn.2016-06.io.spdk:cnode14421", 00:12:53.947 "max_cntlid": 0, 00:12:53.947 "method": "nvmf_create_subsystem", 00:12:53.947 "req_id": 1 00:12:53.947 } 00:12:53.947 Got JSON-RPC error response 00:12:53.947 response: 00:12:53.947 { 00:12:53.947 "code": -32602, 00:12:53.947 "message": 
"Invalid cntlid range [1-0]" 00:12:53.947 }' 00:12:53.947 08:13:27 -- target/invalid.sh@78 -- # [[ request: 00:12:53.947 { 00:12:53.947 "nqn": "nqn.2016-06.io.spdk:cnode14421", 00:12:53.947 "max_cntlid": 0, 00:12:53.947 "method": "nvmf_create_subsystem", 00:12:53.947 "req_id": 1 00:12:53.947 } 00:12:53.947 Got JSON-RPC error response 00:12:53.947 response: 00:12:53.947 { 00:12:53.947 "code": -32602, 00:12:53.947 "message": "Invalid cntlid range [1-0]" 00:12:53.947 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:53.947 08:13:27 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21886 -I 65520 00:12:53.947 [2024-02-13 08:13:27.553833] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21886: invalid cntlid range [1-65520] 00:12:53.947 08:13:27 -- target/invalid.sh@79 -- # out='request: 00:12:53.947 { 00:12:53.947 "nqn": "nqn.2016-06.io.spdk:cnode21886", 00:12:53.947 "max_cntlid": 65520, 00:12:53.947 "method": "nvmf_create_subsystem", 00:12:53.947 "req_id": 1 00:12:53.947 } 00:12:53.947 Got JSON-RPC error response 00:12:53.947 response: 00:12:53.947 { 00:12:53.947 "code": -32602, 00:12:53.947 "message": "Invalid cntlid range [1-65520]" 00:12:53.947 }' 00:12:53.947 08:13:27 -- target/invalid.sh@80 -- # [[ request: 00:12:53.947 { 00:12:53.947 "nqn": "nqn.2016-06.io.spdk:cnode21886", 00:12:53.947 "max_cntlid": 65520, 00:12:53.947 "method": "nvmf_create_subsystem", 00:12:53.947 "req_id": 1 00:12:53.947 } 00:12:53.947 Got JSON-RPC error response 00:12:53.947 response: 00:12:53.947 { 00:12:53.947 "code": -32602, 00:12:53.947 "message": "Invalid cntlid range [1-65520]" 00:12:53.947 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:53.947 08:13:27 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12587 -i 6 -I 5 00:12:54.206 [2024-02-13 
08:13:27.722433] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12587: invalid cntlid range [6-5] 00:12:54.206 08:13:27 -- target/invalid.sh@83 -- # out='request: 00:12:54.206 { 00:12:54.206 "nqn": "nqn.2016-06.io.spdk:cnode12587", 00:12:54.206 "min_cntlid": 6, 00:12:54.206 "max_cntlid": 5, 00:12:54.206 "method": "nvmf_create_subsystem", 00:12:54.206 "req_id": 1 00:12:54.206 } 00:12:54.206 Got JSON-RPC error response 00:12:54.206 response: 00:12:54.206 { 00:12:54.206 "code": -32602, 00:12:54.206 "message": "Invalid cntlid range [6-5]" 00:12:54.206 }' 00:12:54.206 08:13:27 -- target/invalid.sh@84 -- # [[ request: 00:12:54.206 { 00:12:54.206 "nqn": "nqn.2016-06.io.spdk:cnode12587", 00:12:54.206 "min_cntlid": 6, 00:12:54.206 "max_cntlid": 5, 00:12:54.206 "method": "nvmf_create_subsystem", 00:12:54.206 "req_id": 1 00:12:54.206 } 00:12:54.206 Got JSON-RPC error response 00:12:54.206 response: 00:12:54.206 { 00:12:54.206 "code": -32602, 00:12:54.206 "message": "Invalid cntlid range [6-5]" 00:12:54.206 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:54.206 08:13:27 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:54.206 08:13:27 -- target/invalid.sh@87 -- # out='request: 00:12:54.206 { 00:12:54.206 "name": "foobar", 00:12:54.206 "method": "nvmf_delete_target", 00:12:54.206 "req_id": 1 00:12:54.206 } 00:12:54.206 Got JSON-RPC error response 00:12:54.206 response: 00:12:54.206 { 00:12:54.206 "code": -32602, 00:12:54.206 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:54.206 }' 00:12:54.206 08:13:27 -- target/invalid.sh@88 -- # [[ request: 00:12:54.206 { 00:12:54.206 "name": "foobar", 00:12:54.206 "method": "nvmf_delete_target", 00:12:54.206 "req_id": 1 00:12:54.206 } 00:12:54.206 Got JSON-RPC error response 00:12:54.206 response: 00:12:54.206 { 00:12:54.206 "code": -32602, 00:12:54.206 "message": "The specified target doesn't exist, cannot delete it." 00:12:54.206 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:54.206 08:13:27 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:54.206 08:13:27 -- target/invalid.sh@91 -- # nvmftestfini 00:12:54.206 08:13:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:54.206 08:13:27 -- nvmf/common.sh@116 -- # sync 00:12:54.206 08:13:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:54.206 08:13:27 -- nvmf/common.sh@119 -- # set +e 00:12:54.206 08:13:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:54.206 08:13:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:54.206 rmmod nvme_tcp 00:12:54.206 rmmod nvme_fabrics 00:12:54.466 rmmod nvme_keyring 00:12:54.466 08:13:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:54.466 08:13:27 -- nvmf/common.sh@123 -- # set -e 00:12:54.466 08:13:27 -- nvmf/common.sh@124 -- # return 0 00:12:54.466 08:13:27 -- nvmf/common.sh@477 -- # '[' -n 2179363 ']' 00:12:54.466 08:13:27 -- nvmf/common.sh@478 -- # killprocess 2179363 00:12:54.466 08:13:27 -- common/autotest_common.sh@924 -- # '[' -z 2179363 ']' 00:12:54.466 08:13:27 -- common/autotest_common.sh@928 -- # kill -0 2179363 00:12:54.466 08:13:27 -- common/autotest_common.sh@929 -- # uname 00:12:54.466 08:13:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:54.466 08:13:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2179363 00:12:54.466 08:13:27 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:54.466 08:13:27 -- 
common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:54.466 08:13:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2179363' 00:12:54.466 killing process with pid 2179363 00:12:54.466 08:13:27 -- common/autotest_common.sh@943 -- # kill 2179363 00:12:54.466 08:13:27 -- common/autotest_common.sh@948 -- # wait 2179363 00:12:54.725 08:13:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:54.725 08:13:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:54.725 08:13:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:54.725 08:13:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.725 08:13:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:54.725 08:13:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.725 08:13:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.725 08:13:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.632 08:13:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:56.632 00:12:56.632 real 0m12.077s 00:12:56.632 user 0m19.102s 00:12:56.632 sys 0m5.326s 00:12:56.632 08:13:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.632 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:56.632 ************************************ 00:12:56.632 END TEST nvmf_invalid 00:12:56.632 ************************************ 00:12:56.632 08:13:30 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:56.632 08:13:30 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:56.632 08:13:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:56.632 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:56.632 ************************************ 00:12:56.632 START TEST nvmf_abort 00:12:56.632 ************************************ 00:12:56.632 08:13:30 -- common/autotest_common.sh@1102 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:56.892 * Looking for test storage... 00:12:56.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.892 08:13:30 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.892 08:13:30 -- nvmf/common.sh@7 -- # uname -s 00:12:56.892 08:13:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.892 08:13:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.892 08:13:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.892 08:13:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.892 08:13:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.892 08:13:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.892 08:13:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.892 08:13:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.892 08:13:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.892 08:13:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.892 08:13:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:56.892 08:13:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:56.892 08:13:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.892 08:13:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.892 08:13:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.892 08:13:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.892 08:13:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.892 08:13:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.892 08:13:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:12:56.892 08:13:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.892 08:13:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.892 08:13:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.892 08:13:30 -- paths/export.sh@5 -- # export PATH 00:12:56.892 08:13:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.892 08:13:30 -- nvmf/common.sh@46 -- # : 0 00:12:56.892 08:13:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:56.892 08:13:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:56.892 08:13:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:56.892 08:13:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.892 08:13:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.892 08:13:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:56.892 08:13:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:56.892 08:13:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:56.892 08:13:30 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.892 08:13:30 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:56.892 08:13:30 -- target/abort.sh@14 -- # nvmftestinit 00:12:56.892 08:13:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:56.892 08:13:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.892 08:13:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:56.892 08:13:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:56.892 08:13:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:56.892 08:13:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.892 08:13:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.892 08:13:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.892 08:13:30 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:56.892 08:13:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:56.892 08:13:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:56.892 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:13:03.460 08:13:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:03.460 08:13:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:03.460 08:13:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:03.460 08:13:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:03.460 08:13:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:03.460 08:13:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:03.460 08:13:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:03.460 08:13:35 -- nvmf/common.sh@294 -- # net_devs=() 00:13:03.460 08:13:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:03.460 08:13:35 -- nvmf/common.sh@295 -- # e810=() 00:13:03.460 08:13:35 -- nvmf/common.sh@295 -- # local -ga e810 00:13:03.460 08:13:35 -- nvmf/common.sh@296 -- # x722=() 00:13:03.460 08:13:35 -- nvmf/common.sh@296 -- # local -ga x722 00:13:03.460 08:13:35 -- nvmf/common.sh@297 -- # mlx=() 00:13:03.460 08:13:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:03.460 08:13:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.460 08:13:35 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.460 08:13:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:03.460 08:13:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:03.460 08:13:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:03.460 08:13:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:03.460 08:13:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:03.460 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:03.460 08:13:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:03.460 08:13:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:03.460 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:03.460 08:13:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:03.460 08:13:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:03.460 08:13:35 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:03.460 08:13:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:03.460 08:13:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.460 08:13:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:03.460 08:13:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.460 08:13:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:03.460 Found net devices under 0000:af:00.0: cvl_0_0 00:13:03.460 08:13:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.460 08:13:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:03.460 08:13:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.460 08:13:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:03.460 08:13:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.460 08:13:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:03.460 Found net devices under 0000:af:00.1: cvl_0_1 00:13:03.460 08:13:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.460 08:13:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:03.460 08:13:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:03.460 08:13:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:03.460 08:13:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:03.460 08:13:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:03.460 08:13:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.460 08:13:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.460 08:13:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.460 08:13:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:03.460 08:13:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.460 08:13:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.460 08:13:36 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:03.460 08:13:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.460 08:13:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.460 08:13:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:03.460 08:13:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:03.460 08:13:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.460 08:13:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.460 08:13:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.461 08:13:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.461 08:13:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:03.461 08:13:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.461 08:13:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.461 08:13:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.461 08:13:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:03.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:13:03.461 00:13:03.461 --- 10.0.0.2 ping statistics --- 00:13:03.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.461 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:03.461 08:13:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:13:03.461 00:13:03.461 --- 10.0.0.1 ping statistics --- 00:13:03.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.461 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:13:03.461 08:13:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.461 08:13:36 -- nvmf/common.sh@410 -- # return 0 00:13:03.461 08:13:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:03.461 08:13:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.461 08:13:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:03.461 08:13:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:03.461 08:13:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.461 08:13:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:03.461 08:13:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:03.461 08:13:36 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:03.461 08:13:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:03.461 08:13:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:03.461 08:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:03.461 08:13:36 -- nvmf/common.sh@469 -- # nvmfpid=2184020 00:13:03.461 08:13:36 -- nvmf/common.sh@470 -- # waitforlisten 2184020 00:13:03.461 08:13:36 -- common/autotest_common.sh@817 -- # '[' -z 2184020 ']' 00:13:03.461 08:13:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.461 08:13:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:03.461 08:13:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:03.461 08:13:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:03.461 08:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:03.461 08:13:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:03.461 [2024-02-13 08:13:36.320131] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:03.461 [2024-02-13 08:13:36.320176] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.461 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.461 [2024-02-13 08:13:36.382060] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.461 [2024-02-13 08:13:36.457958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.461 [2024-02-13 08:13:36.458077] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.461 [2024-02-13 08:13:36.458085] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.461 [2024-02-13 08:13:36.458091] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:03.461 [2024-02-13 08:13:36.458125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.461 [2024-02-13 08:13:36.458145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.461 [2024-02-13 08:13:36.458146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.461 08:13:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:03.461 08:13:37 -- common/autotest_common.sh@850 -- # return 0 00:13:03.461 08:13:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:03.461 08:13:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:03.461 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 08:13:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.720 08:13:37 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:03.720 08:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.720 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 [2024-02-13 08:13:37.153568] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.720 08:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.720 08:13:37 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:03.720 08:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.720 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 Malloc0 00:13:03.720 08:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.720 08:13:37 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:03.720 08:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.720 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 Delay0 00:13:03.720 08:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.720 08:13:37 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:03.720 08:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.720 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 08:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.720 08:13:37 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:03.720 08:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.720 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 08:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.720 08:13:37 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:03.720 08:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.720 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 [2024-02-13 08:13:37.224528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.720 08:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.720 08:13:37 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.720 08:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.720 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:03.720 08:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.720 08:13:37 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:03.720 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.720 [2024-02-13 08:13:37.378826] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:06.253 Initializing NVMe Controllers 00:13:06.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:13:06.253 controller IO queue size 128 less than required 00:13:06.253 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:06.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:06.253 Initialization complete. Launching workers. 00:13:06.253 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 43320 00:13:06.253 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43383, failed to submit 62 00:13:06.253 success 43320, unsuccess 63, failed 0 00:13:06.253 08:13:39 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:06.253 08:13:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.253 08:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:06.253 08:13:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.253 08:13:39 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:06.253 08:13:39 -- target/abort.sh@38 -- # nvmftestfini 00:13:06.253 08:13:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:06.253 08:13:39 -- nvmf/common.sh@116 -- # sync 00:13:06.253 08:13:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:06.253 08:13:39 -- nvmf/common.sh@119 -- # set +e 00:13:06.253 08:13:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:06.253 08:13:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:06.253 rmmod nvme_tcp 00:13:06.253 rmmod nvme_fabrics 00:13:06.253 rmmod nvme_keyring 00:13:06.253 08:13:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:06.253 08:13:39 -- nvmf/common.sh@123 -- # set -e 00:13:06.253 08:13:39 -- nvmf/common.sh@124 -- # return 0 00:13:06.253 08:13:39 -- nvmf/common.sh@477 -- # '[' -n 2184020 ']' 00:13:06.253 08:13:39 -- nvmf/common.sh@478 -- # killprocess 2184020 00:13:06.253 08:13:39 -- common/autotest_common.sh@924 -- # '[' -z 2184020 ']' 00:13:06.253 08:13:39 
-- common/autotest_common.sh@928 -- # kill -0 2184020 00:13:06.253 08:13:39 -- common/autotest_common.sh@929 -- # uname 00:13:06.253 08:13:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:06.253 08:13:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2184020 00:13:06.253 08:13:39 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:13:06.253 08:13:39 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:13:06.253 08:13:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2184020' 00:13:06.253 killing process with pid 2184020 00:13:06.253 08:13:39 -- common/autotest_common.sh@943 -- # kill 2184020 00:13:06.253 08:13:39 -- common/autotest_common.sh@948 -- # wait 2184020 00:13:06.253 08:13:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:06.253 08:13:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:06.253 08:13:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:06.253 08:13:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.253 08:13:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:06.253 08:13:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.253 08:13:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.253 08:13:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.790 08:13:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:08.790 00:13:08.790 real 0m11.677s 00:13:08.790 user 0m13.665s 00:13:08.790 sys 0m5.370s 00:13:08.790 08:13:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.790 08:13:41 -- common/autotest_common.sh@10 -- # set +x 00:13:08.790 ************************************ 00:13:08.790 END TEST nvmf_abort 00:13:08.790 ************************************ 00:13:08.790 08:13:41 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:13:08.790 08:13:41 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:08.790 08:13:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:08.790 08:13:41 -- common/autotest_common.sh@10 -- # set +x 00:13:08.790 ************************************ 00:13:08.790 START TEST nvmf_ns_hotplug_stress 00:13:08.790 ************************************ 00:13:08.790 08:13:41 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:08.790 * Looking for test storage... 00:13:08.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.790 08:13:42 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.790 08:13:42 -- nvmf/common.sh@7 -- # uname -s 00:13:08.790 08:13:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.790 08:13:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.790 08:13:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.790 08:13:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.790 08:13:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.790 08:13:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.790 08:13:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.790 08:13:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.790 08:13:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.790 08:13:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.790 08:13:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:08.790 08:13:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:08.790 08:13:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.790 08:13:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:13:08.790 08:13:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.790 08:13:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.790 08:13:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.790 08:13:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.790 08:13:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.790 08:13:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.790 08:13:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.790 08:13:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.790 08:13:42 -- paths/export.sh@5 -- # export PATH 00:13:08.790 08:13:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.790 08:13:42 -- nvmf/common.sh@46 -- # : 0 00:13:08.790 08:13:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:08.790 08:13:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:08.791 08:13:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:08.791 08:13:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.791 08:13:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.791 08:13:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:08.791 08:13:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:08.791 08:13:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:08.791 08:13:42 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.791 08:13:42 -- target/ns_hotplug_stress.sh@13 -- # 
nvmftestinit 00:13:08.791 08:13:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:08.791 08:13:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.791 08:13:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:08.791 08:13:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:08.791 08:13:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:08.791 08:13:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.791 08:13:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.791 08:13:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.791 08:13:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:08.791 08:13:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:08.791 08:13:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:08.791 08:13:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.067 08:13:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:14.067 08:13:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:14.067 08:13:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:14.067 08:13:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:14.067 08:13:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:14.067 08:13:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:14.067 08:13:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:14.067 08:13:47 -- nvmf/common.sh@294 -- # net_devs=() 00:13:14.067 08:13:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:14.067 08:13:47 -- nvmf/common.sh@295 -- # e810=() 00:13:14.067 08:13:47 -- nvmf/common.sh@295 -- # local -ga e810 00:13:14.067 08:13:47 -- nvmf/common.sh@296 -- # x722=() 00:13:14.067 08:13:47 -- nvmf/common.sh@296 -- # local -ga x722 00:13:14.067 08:13:47 -- nvmf/common.sh@297 -- # mlx=() 00:13:14.067 08:13:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:14.067 08:13:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.067 08:13:47 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.067 08:13:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:14.067 08:13:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:14.067 08:13:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:14.067 08:13:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:14.067 08:13:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:14.067 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:14.067 08:13:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:14.067 08:13:47 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:14.067 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:14.067 08:13:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:14.067 08:13:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:14.067 08:13:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.067 08:13:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:14.067 08:13:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.067 08:13:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:14.067 Found net devices under 0000:af:00.0: cvl_0_0 00:13:14.067 08:13:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.067 08:13:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:14.067 08:13:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.067 08:13:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:14.067 08:13:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.067 08:13:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:14.067 Found net devices under 0000:af:00.1: cvl_0_1 00:13:14.067 08:13:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.067 08:13:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:14.067 08:13:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:14.067 08:13:47 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:13:14.067 08:13:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:14.067 08:13:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:14.067 08:13:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.067 08:13:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.067 08:13:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.067 08:13:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:14.067 08:13:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.067 08:13:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.067 08:13:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:14.067 08:13:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.067 08:13:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.067 08:13:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:14.067 08:13:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:14.067 08:13:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.067 08:13:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.067 08:13:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.067 08:13:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.067 08:13:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:14.067 08:13:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.327 08:13:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.327 08:13:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.327 08:13:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:14.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:14.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:13:14.327 00:13:14.327 --- 10.0.0.2 ping statistics --- 00:13:14.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.327 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:14.327 08:13:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:13:14.327 00:13:14.327 --- 10.0.0.1 ping statistics --- 00:13:14.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.327 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:14.327 08:13:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.327 08:13:47 -- nvmf/common.sh@410 -- # return 0 00:13:14.327 08:13:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:14.327 08:13:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.327 08:13:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:14.327 08:13:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:14.327 08:13:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.327 08:13:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:14.327 08:13:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:14.327 08:13:47 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:14.327 08:13:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:14.328 08:13:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:14.328 08:13:47 -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 08:13:47 -- nvmf/common.sh@469 -- # nvmfpid=2188457 00:13:14.328 08:13:47 -- nvmf/common.sh@470 -- # waitforlisten 2188457 00:13:14.328 08:13:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:14.328 08:13:47 -- 
common/autotest_common.sh@817 -- # '[' -z 2188457 ']' 00:13:14.328 08:13:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.328 08:13:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:14.328 08:13:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.328 08:13:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:14.328 08:13:47 -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 [2024-02-13 08:13:47.879750] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:14.328 [2024-02-13 08:13:47.879792] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.328 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.328 [2024-02-13 08:13:47.943741] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.328 [2024-02-13 08:13:48.013447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.328 [2024-02-13 08:13:48.013562] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.328 [2024-02-13 08:13:48.013569] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.328 [2024-02-13 08:13:48.013575] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
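The target above was launched with `-i 0 -e 0xFFFF -m 0xE`, and the startup notices show reactors coming up on cores 1, 2 and 3. A minimal sketch of how that `-m` core mask maps to reactor cores (bit n set means core n gets a reactor; this is illustrative bash, not part of the test scripts):

```shell
# -m 0xE selects cores whose bit is set in the mask: 0xE = 0b1110 -> cores 1, 2, 3.
mask=0xE
for core in 0 1 2 3; do
    if (( (mask >> core) & 1 )); then
        echo "reactor on core $core"   # prints for cores 1, 2 and 3
    fi
done
```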
00:13:14.328 [2024-02-13 08:13:48.013676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.328 [2024-02-13 08:13:48.013764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.328 [2024-02-13 08:13:48.013765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.266 08:13:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:15.266 08:13:48 -- common/autotest_common.sh@850 -- # return 0 00:13:15.266 08:13:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:15.266 08:13:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:15.266 08:13:48 -- common/autotest_common.sh@10 -- # set +x 00:13:15.266 08:13:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.266 08:13:48 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:15.266 08:13:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:15.266 [2024-02-13 08:13:48.860648] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.266 08:13:48 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:15.525 08:13:49 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.785 [2024-02-13 08:13:49.234020] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.785 08:13:49 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.785 08:13:49 -- target/ns_hotplug_stress.sh@23 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:16.044 Malloc0 00:13:16.044 08:13:49 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:16.304 Delay0 00:13:16.304 08:13:49 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.304 08:13:49 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:16.563 NULL1 00:13:16.563 08:13:50 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:16.822 08:13:50 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2188796 00:13:16.822 08:13:50 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:16.822 08:13:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:16.822 08:13:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.822 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.082 08:13:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.082 08:13:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:17.082 08:13:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:17.341 true 00:13:17.341 08:13:50 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:17.341 08:13:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.600 08:13:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.600 08:13:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:13:17.600 08:13:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:17.859 true 00:13:17.859 08:13:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:17.859 08:13:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.118 08:13:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.118 08:13:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:13:18.118 08:13:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:18.378 true 00:13:18.378 08:13:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:18.378 08:13:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.637 08:13:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.637 08:13:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:13:18.637 08:13:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1004 00:13:18.900 true 00:13:18.900 08:13:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:18.900 08:13:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.159 08:13:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.159 08:13:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:13:19.159 08:13:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:19.418 true 00:13:19.418 08:13:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:19.418 08:13:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.678 08:13:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.678 08:13:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:13:19.937 08:13:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:19.937 true 00:13:19.937 08:13:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:19.937 08:13:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.196 08:13:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.456 08:13:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:13:20.456 08:13:53 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:20.456 true 00:13:20.456 08:13:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:20.456 08:13:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.716 08:13:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.975 08:13:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:13:20.975 08:13:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:20.975 true 00:13:20.975 08:13:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:20.975 08:13:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.234 08:13:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.234 Read completed with error (sct=0, sc=11) 00:13:21.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.522 [2024-02-13 08:13:54.949720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.522 [2024-02-13 08:13:54.949801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.522 [2024-02-13 08:13:54.949854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:21.522 [... the same 'Read NLB 1 * block size 512 > SGL length 1' error repeated verbatim (only timestamps differ) for the remainder of the perf run; duplicates omitted ...]
08:13:54.955117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.955975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 
[2024-02-13 08:13:54.956615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.956969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957168] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.523 [2024-02-13 08:13:54.957352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.957950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958421] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.958991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.959982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 
08:13:54.960061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.960984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 
[2024-02-13 08:13:54.961259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.961970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.962014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.524 [2024-02-13 08:13:54.962058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962198] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.962981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963410] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.963964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.964009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.964062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.525 [2024-02-13 08:13:54.964111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeated, timestamps 08:13:54.964158 through 08:13:54.974695 ...] 00:13:21.528 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:21.528 
[... identical error line repeated, timestamps 08:13:54.974740 through 08:13:54.978671 ...] 00:13:21.528 [2024-02-13 
08:13:54.978714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.528 [2024-02-13 08:13:54.978757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.528 [2024-02-13 08:13:54.978797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.528 08:13:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:13:21.528 [2024-02-13 08:13:54.978841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.528 [2024-02-13 08:13:54.978885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.528 [2024-02-13 08:13:54.978926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.528 [2024-02-13 08:13:54.978956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.978987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 08:13:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:21.529 [2024-02-13 08:13:54.979150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:21.529 [2024-02-13 08:13:54.979240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.979768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980102] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.980989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981375] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.981970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 
08:13:54.982619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.982849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.529 [2024-02-13 08:13:54.983471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.983959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 
[2024-02-13 08:13:54.984153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984822] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.984958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.985827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986293] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.986976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 08:13:54.987501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.530 [2024-02-13 
08:13:54.987541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... the identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats for every read command from 08:13:54.987582 through 08:13:55.001980 (log time 00:13:21.530-00:13:21.534); duplicate entries omitted ...] 
00:13:21.534 [2024-02-13 
08:13:55.002009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.002984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 
[2024-02-13 08:13:55.003257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003871] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.003951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.004976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.005006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.005037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.005080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.005120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.534 [2024-02-13 08:13:55.005161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005412] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.005975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 
08:13:55.006681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.006977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.007992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 
[2024-02-13 08:13:55.008267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008822] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.008964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.535 [2024-02-13 08:13:55.009574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.009969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010452] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.010998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.011043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.536 [2024-02-13 08:13:55.011085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:21.539 [2024-02-13 08:13:55.024416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.024986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025095] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.025980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 
08:13:55.026659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.539 [2024-02-13 08:13:55.026834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.026872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.026912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.026956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.026991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 
[2024-02-13 08:13:55.027889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.027985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028809] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.028994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.029993] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.030970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.031017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.031062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.031115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.031161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.540 [2024-02-13 08:13:55.031205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 
08:13:55.031670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.031951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 
[2024-02-13 08:13:55.032916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.032994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033459] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.033984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.034031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.034083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.541 [2024-02-13 08:13:55.034130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.541 [2024-02-13 08:13:55.034173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.048969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049015] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.049973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 
08:13:55.050711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.050990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 
[2024-02-13 08:13:55.051923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.051962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.052001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.052032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.052062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.052102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.052140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.545 [2024-02-13 08:13:55.052171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052791] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.052975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.053983] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.054963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 
08:13:55.055295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.055982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 [2024-02-13 08:13:55.056736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.546 
[2024-02-13 08:13:55.056784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.056827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.056878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.056924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.056974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057438] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.547 [2024-02-13 08:13:55.057484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated, timestamps 08:13:55.057529 through 08:13:55.070341 elided]
00:13:21.550 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical *ERROR* line repeated, timestamps 08:13:55.070660 through 08:13:55.071770 elided]
00:13:21.550 [2024-02-13 08:13:55.071815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:21.550 [2024-02-13 08:13:55.071867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.071908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.071955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.071993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072422] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.072960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.073943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 
08:13:55.073991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.074038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.074091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.550 [2024-02-13 08:13:55.074134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.074975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 
[2024-02-13 08:13:55.075153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075773] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.075996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.076971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077191] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.077967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.551 [2024-02-13 08:13:55.078323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 
08:13:55.078466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.078964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.079980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 
[2024-02-13 08:13:55.080071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080687] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.552 [2024-02-13 08:13:55.080717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:21.556 [2024-02-13 08:13:55.095276] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.095986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096518] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.096980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.097985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 
08:13:55.098078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.098980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.556 [2024-02-13 08:13:55.099291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 
[2024-02-13 08:13:55.099376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099935] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.099974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.100964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101446] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.101987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 
08:13:55.102700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.102989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.103981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.557 [2024-02-13 08:13:55.104024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 
[2024-02-13 08:13:55.104173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104787] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.558 [2024-02-13 08:13:55.104834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.560 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:21.561 [2024-02-13 08:13:55.118486] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.118533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.118581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.118639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.118684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.118733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.118777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.118816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.119982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120030] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.120997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 
08:13:55.121400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.121927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.561 [2024-02-13 08:13:55.122605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.122652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.122702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 
[2024-02-13 08:13:55.122748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.122799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.122845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.122891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.122933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.122979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123381] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.562 [2024-02-13 08:13:55.123994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124659] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.124850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.125969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 
08:13:55.126155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.126996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 
[2024-02-13 08:13:55.127409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.562 [2024-02-13 08:13:55.127656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.127699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.127750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.127791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.127838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.127886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.128226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.128260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.128299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.128347] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.563 [2024-02-13 08:13:55.128389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
1 00:13:21.565 [2024-02-13 08:13:55.141999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142538] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.565 [2024-02-13 08:13:55.142800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.142842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.142891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.142936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.142986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.143967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144074] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.144972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 
08:13:55.145266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.145951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 
[2024-02-13 08:13:55.146845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.146991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147419] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.147975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.566 [2024-02-13 08:13:55.148392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148645] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.148965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.149985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 
08:13:55.150213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.567 [2024-02-13 08:13:55.150870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 08:13:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:21.571 08:13:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.571 [2024-02-13 08:13:55.164299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164487] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:21.571 [2024-02-13 08:13:55.164621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.164987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 
08:13:55.165114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.165985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 
[2024-02-13 08:13:55.166359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166950] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.166988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.167981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.168025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.168066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.168114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.571 [2024-02-13 08:13:55.168159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168516] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.168963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 
08:13:55.169749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.169990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.170999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 
[2024-02-13 08:13:55.171304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171840] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.171963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.572 [2024-02-13 08:13:55.172467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.172986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173396] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.173996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.174044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.573 [2024-02-13 08:13:55.174090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated verbatim for each subsequent read command; timestamps 08:13:55.174144 through 08:13:55.188370 omitted ...]
00:13:21.858 [2024-02-13 08:13:55.188735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.188775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.188806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.188837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.188881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.188928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.188977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 
08:13:55.189393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.189960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 
[2024-02-13 08:13:55.190621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.190966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191239] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.191991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192778] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.192969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.858 [2024-02-13 08:13:55.193015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.193978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 
08:13:55.194026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.194974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 
[2024-02-13 08:13:55.195604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.195981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196264] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.196964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.859 [2024-02-13 08:13:55.197394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.860 [2024-02-13 08:13:55.197445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.860 [2024-02-13 08:13:55.197491] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.860 [2024-02-13 08:13:55.197538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.860 [last message repeated, timestamps 08:13:55.197869 through 08:13:55.211127] 00:13:21.863 [2024-02-13 08:13:55.211173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211849] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.211999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:21.863 [2024-02-13 08:13:55.212932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.212979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 
[2024-02-13 08:13:55.213909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.213953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.214000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.214040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.214085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.214129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.214176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.863 [2024-02-13 08:13:55.214229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214557] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.214958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.215976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216069] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.216982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 
08:13:55.217296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.217976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 [2024-02-13 08:13:55.218461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.864 
[2024-02-13 08:13:55.218799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.218850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.218900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.218945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.218995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219449] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.219991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.865 [2024-02-13 08:13:55.220690] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.234975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235284] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.868 [2024-02-13 08:13:55.235968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 
08:13:55.236532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.236740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.237958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 
[2024-02-13 08:13:55.238131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238671] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.238960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.239812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240255] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.869 [2024-02-13 08:13:55.240708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.240753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.240795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.240839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.240897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.240946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.240991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 
08:13:55.241537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.241973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 
[2024-02-13 08:13:55.242758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.242849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243626] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.243958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.870 [2024-02-13 08:13:55.244329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:13:21.874 [2024-02-13 08:13:55.258784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.258827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.258866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.258908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.258942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.258973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259361] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.259981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 
08:13:55.260598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.260968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:21.874 [2024-02-13 08:13:55.261307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:21.874 [2024-02-13 08:13:55.261505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.261960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262098] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.874 [2024-02-13 08:13:55.262333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.262992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263355] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.263977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.264919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 
08:13:55.264962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.265981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.266035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.266082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.266129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.266172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.875 [2024-02-13 08:13:55.266215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 
[2024-02-13 08:13:55.266262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266795] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.266976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.876 [2024-02-13 08:13:55.267735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.281994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.282979] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.283972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.284015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.284059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.284108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.284154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 
08:13:55.284197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.879 [2024-02-13 08:13:55.284248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.284981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 
[2024-02-13 08:13:55.285729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.285989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286253] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.880 [2024-02-13 08:13:55.286964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287549] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.880 [2024-02-13 08:13:55.287936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.287984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.288986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 
08:13:55.289158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.289998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 
[2024-02-13 08:13:55.290376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.290961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.291002] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.881 [2024-02-13 08:13:55.291044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [message repeated; duplicate entries omitted through 2024-02-13 08:13:55.305503] 00:13:21.885 [2024-02-13 08:13:55.305547] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.305980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.306958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307094] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:21.885 [2024-02-13 08:13:55.307329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 
08:13:55.307737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.307971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 
[2024-02-13 08:13:55.308945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.308988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309525] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.309947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.310003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.885 [2024-02-13 08:13:55.310053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.310970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311101] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.311993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 
08:13:55.312290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.312564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 08:13:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.886 [2024-02-13 08:13:55.509535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509659] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.509981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.886 [2024-02-13 08:13:55.510701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.887 [2024-02-13 08:13:55.510745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.887 [2024-02-13 08:13:55.510798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.887 [2024-02-13 08:13:55.510841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:21.887 [2024-02-13 
08:13:55.510888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line repeated verbatim from 2024-02-13 08:13:55.510928 through 08:13:55.521010; repeats elided ...]
00:13:21.889 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same *ERROR* line repeated verbatim from 2024-02-13 08:13:55.521344 through 08:13:55.525024; repeats elided ...]
00:13:22.183 [2024-02-13 08:13:55.525067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 
[2024-02-13 08:13:55.525594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.525991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526217] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.526877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.527227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.527274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.527327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.183 [2024-02-13 08:13:55.527369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527737] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.527997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.528981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 
08:13:55.529025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.529903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 
[2024-02-13 08:13:55.530507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.530984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531139] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.184 [2024-02-13 08:13:55.531722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.531761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.531799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.531831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.531872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.531913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.531965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532330] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.532994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 08:13:55.533854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.185 [2024-02-13 
08:13:55.533887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:22.186 08:13:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010
00:13:22.186 08:13:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:13:22.188 [2024-02-13 08:13:55.548079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.188 [2024-02-13 08:13:55.548786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.548830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.548874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.548918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 
[2024-02-13 08:13:55.548963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549603] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.549958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550834] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.550975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.551993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 
08:13:55.552319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.552973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.553010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.553051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.553090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.553121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.553150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.553194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.189 [2024-02-13 08:13:55.553237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 
[2024-02-13 08:13:55.553570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.553988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554491] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.554964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555778] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.555958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 08:13:55.556953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.190 [2024-02-13 
[2024-02-13 08:13:55.557009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[… previous entry repeated verbatim for each subsequent read command, timestamps 08:13:55.557052 through 08:13:55.571402 (log time 00:13:22.190–00:13:22.193) …]
00:13:22.192 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.571952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 
[2024-02-13 08:13:55.571992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572933] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.572996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.193 [2024-02-13 08:13:55.573557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.573979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574159] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.574980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 
08:13:55.575727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.575957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.576966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 
[2024-02-13 08:13:55.577007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577583] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.577986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.194 [2024-02-13 08:13:55.578890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.578935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.578982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579114] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.579984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.580028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.580075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.580136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.580179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.580225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.580270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 08:13:55.580316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.195 [2024-02-13 
08:13:55.580364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
08:13:55.595060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.595973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 
[2024-02-13 08:13:55.596266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.596980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597209] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.597978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598424] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.598974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.599020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.198 [2024-02-13 08:13:55.599062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.599716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 
08:13:55.600051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.600975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 
[2024-02-13 08:13:55.601303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601955] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.601998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.602755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603438] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.199 [2024-02-13 08:13:55.603810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.200 [2024-02-13 08:13:55.603853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.200 [2024-02-13 08:13:55.603898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.200 [2024-02-13 08:13:55.603945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.200 [2024-02-13 08:13:55.603993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.200 [2024-02-13 08:13:55.604041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.200 [2024-02-13 08:13:55.604089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.200 [2024-02-13 08:13:55.604130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:22.202 [2024-02-13 
08:13:55.618450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.618962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.619002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.619047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.619080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.619109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.619151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.202 [2024-02-13 08:13:55.619202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 
[2024-02-13 08:13:55.619685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.619969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620365] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.620790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621845] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.621984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.622995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 
08:13:55.623102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.623858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.203 [2024-02-13 08:13:55.624192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 
[2024-02-13 08:13:55.624690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.624992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625238] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.625998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626543] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.626872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.627195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.627230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.627275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.627325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.627384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.627427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.204 [2024-02-13 08:13:55.627474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.204 [... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated for timestamps 08:13:55.627516 through 08:13:55.641572 ...] 00:13:22.207 [2024-02-13 08:13:55.641618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.641941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 
08:13:55.642515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.642995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 
[2024-02-13 08:13:55.643803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.643987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644353] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.207 [2024-02-13 08:13:55.644446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.644960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.645969] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.646997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 
08:13:55.647192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.647968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 
[2024-02-13 08:13:55.648753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.648991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649418] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.208 [2024-02-13 08:13:55.649457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.649972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 [2024-02-13 08:13:55.650664] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.209 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:22.211 [2024-02-13 08:13:55.665229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665837] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.665982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.666993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 
08:13:55.667485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.667968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 
[2024-02-13 08:13:55.668606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.668998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669310] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.669973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.213 [2024-02-13 08:13:55.670354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670874] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.670970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.671977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 
08:13:55.672128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.672980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 
[2024-02-13 08:13:55.673759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.673988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674345] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.214 [2024-02-13 08:13:55.674387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error line repeated continuously, timestamps 08:13:55.674432 through 08:13:55.689002; repeats elided]
00:13:22.217 [2024-02-13 08:13:55.689050] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.689980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690330] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.217 [2024-02-13 08:13:55.690768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.690801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.690831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.690872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.690927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 
08:13:55.691808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.691980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.692948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 
[2024-02-13 08:13:55.692998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693658] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.693901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.694990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695253] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.695956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.696003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.696049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.696095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.696137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.696180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.218 [2024-02-13 08:13:55.696223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 
08:13:55.696533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.696970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.697989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 
[2024-02-13 08:13:55.698108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698687] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.219 [2024-02-13 08:13:55.698717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated with successive timestamps 08:13:55.698754 through 08:13:55.704845 ...]
00:13:22.220 true 00:13:22.220
[... identical *ERROR* line repeated with successive timestamps 08:13:55.704885 through 08:13:55.712303 ...]
00:13:22.222 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:22.222 [2024-02-13 08:13:55.712633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13
08:13:55.712684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.712737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.712782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.712830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.712872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.712921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.712963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.712997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 
[2024-02-13 08:13:55.713805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.713981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714397] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.714993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715942] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.715985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.222 [2024-02-13 08:13:55.716358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.716970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 
08:13:55.717249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.717962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 
[2024-02-13 08:13:55.718774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.718964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719422] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.719993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720607] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.720987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.721039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.721085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.721135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.721186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.721229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.721277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.223 [2024-02-13 08:13:55.721321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:22.226 08:13:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 
00:13:22.226 08:13:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
[2024-02-13 08:13:55.735796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.735839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.735885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.735939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.735984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736444] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.736979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.227 [2024-02-13 08:13:55.737333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.737992] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.738982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 
08:13:55.739253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.739976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 
[2024-02-13 08:13:55.740756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.740983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741312] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.228 [2024-02-13 08:13:55.741621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.741961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742559] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.742963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.743966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 
08:13:55.744101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 [2024-02-13 08:13:55.744817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.229 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:22.232 [2024-02-13 08:13:55.759305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 
[2024-02-13 08:13:55.759351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.759967] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.760859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761497] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.761989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 
08:13:55.762734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.762961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.233 [2024-02-13 08:13:55.763520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.763923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 
[2024-02-13 08:13:55.764291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764905] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.764980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.765986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766151] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.766890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 
08:13:55.767661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.767972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.768010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.234 [2024-02-13 08:13:55.768045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.235 [2024-02-13 08:13:55.768085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.235 [2024-02-13 08:13:55.768128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.235 [2024-02-13 08:13:55.768170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.235 [2024-02-13 08:13:55.768219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.235 [2024-02-13 08:13:55.768266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.782817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.782865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.782909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.782956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.782998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 
[2024-02-13 08:13:55.783482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.783988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784042] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.784977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.238 [2024-02-13 08:13:55.785661] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.785979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 
08:13:55.786832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.786977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.787993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 
[2024-02-13 08:13:55.788124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.788975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789021] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.239 [2024-02-13 08:13:55.789425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.789983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790286] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.790966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 08:13:55.791761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 [2024-02-13 
08:13:55.791791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.240 
[... identical *ERROR* line from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated for each subsequent read command, timestamps 2024-02-13 08:13:55.791819 through 08:13:55.805806; repeats omitted ...] 00:13:22.243 
[2024-02-13 
08:13:55.805848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.805886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.805927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.805966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.805996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:22.243 [2024-02-13 08:13:55.806778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.806962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.807009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.807057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.807107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.243 [2024-02-13 08:13:55.807154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807397] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.807975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808655] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.808990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.809981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 
08:13:55.810197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.244 [2024-02-13 08:13:55.810847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.810894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.810944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.810987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 
[2024-02-13 08:13:55.811345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811930] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.811978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.245 [2024-02-13 08:13:55.812990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813639] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.813966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 08:13:55.814840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.245 [2024-02-13 
08:13:55.814884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 
08:13:55.829552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.829978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.830975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 
[2024-02-13 08:13:55.831071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831679] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.831971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.249 [2024-02-13 08:13:55.832786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.832831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.832880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.832925] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.832975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.833678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 
08:13:55.834533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.834994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.835036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.835084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.835132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.835167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.250 [2024-02-13 08:13:55.835196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.534 [2024-02-13 08:13:55.835625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.835681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.835728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.835777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.835824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 
[2024-02-13 08:13:55.835866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.835917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.835961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836474] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.836810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.837969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838181] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.535 [2024-02-13 08:13:55.838852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same message repeated with successive timestamps through 2024-02-13 08:13:55.853848, log timestamps 00:13:22.535-00:13:22.538 ...]
block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.853900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.853944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.853992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.854032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.854076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.854117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.854148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.854185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.538 [2024-02-13 08:13:55.854228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 
08:13:55.854492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.854993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.855786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 
[2024-02-13 08:13:55.856135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:22.539 [2024-02-13 08:13:55.856375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:22.539 [2024-02-13 08:13:55.856804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.856991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857445] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.539 [2024-02-13 08:13:55.857729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.857770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.857813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.857843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.857883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.857923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.857964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.857997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858696] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.858944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.859968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 
08:13:55.860236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.860991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 
[2024-02-13 08:13:55.861504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.861955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.862287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.862336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.540 [2024-02-13 08:13:55.862387] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.862999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.863051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.541 [2024-02-13 08:13:55.863098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.541 [2024-02-13 08:13:55.863145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:22.544 [… message repeated continuously, timestamps 08:13:55.863190 through 08:13:55.877782; identical *ERROR* lines elided …] 
00:13:22.544 [2024-02-13 08:13:55.877822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:22.544 [2024-02-13 08:13:55.877865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.877910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.877954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.877984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878386] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.878967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 
08:13:55.879615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.544 [2024-02-13 08:13:55.879852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.879902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.879946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:22.545 [2024-02-13 08:13:55.880788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:23.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.481 08:13:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.481 08:13:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:13:23.481 08:13:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:23.740 true 00:13:23.740 08:13:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:23.740 08:13:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.677 08:13:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.677 08:13:58 
-- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:13:24.677 08:13:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:24.936 true 00:13:24.936 08:13:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:24.936 08:13:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.195 08:13:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.195 08:13:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:13:25.195 08:13:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:25.454 true 00:13:25.454 08:13:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:25.454 08:13:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.834 08:14:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.834 08:14:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:13:26.834 08:14:00 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:26.834 true 00:13:26.834 08:14:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:26.834 08:14:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.772 08:14:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.032 08:14:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:13:28.032 08:14:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:28.032 true 00:13:28.290 08:14:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:28.290 08:14:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.290 08:14:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.549 08:14:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:13:28.549 08:14:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:28.549 true 00:13:28.808 08:14:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:28.808 08:14:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.808 08:14:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.067 08:14:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:13:29.067 08:14:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:29.067 true 00:13:29.326 08:14:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:29.326 08:14:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.326 08:14:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.585 08:14:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:13:29.585 08:14:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:29.844 true 00:13:29.844 08:14:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:29.844 08:14:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.223 08:14:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:13:31.223 08:14:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:13:31.223 08:14:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:31.223 true 00:13:31.223 08:14:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:31.223 08:14:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.161 08:14:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.421 08:14:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:13:32.421 08:14:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:32.421 true 00:13:32.421 08:14:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:32.421 08:14:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.680 08:14:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.940 08:14:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:13:32.940 08:14:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:32.940 true 00:13:32.940 08:14:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:32.940 08:14:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.319 08:14:07 -- 
target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.579 08:14:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:13:34.579 08:14:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:34.579 true 00:13:34.579 08:14:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:34.579 08:14:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.515 08:14:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.774 08:14:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:13:35.774 08:14:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:35.774 true 00:13:35.774 08:14:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:35.774 08:14:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.033 08:14:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.293 08:14:09 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:13:36.293 08:14:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:36.293 true 00:13:36.293 08:14:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:36.293 08:14:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.706 08:14:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.706 08:14:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:13:37.706 08:14:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:37.965 true 00:13:37.965 08:14:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:37.965 08:14:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.903 08:14:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.903 08:14:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:13:38.903 08:14:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:39.161 true 00:13:39.161 08:14:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:39.161 08:14:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.420 08:14:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.420 08:14:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:13:39.420 08:14:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:39.679 true 00:13:39.679 08:14:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:39.679 08:14:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.939 08:14:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.939 08:14:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:13:39.939 08:14:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:40.198 true 00:13:40.198 08:14:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:40.198 08:14:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:13:40.456 08:14:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.456 08:14:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:13:40.456 08:14:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:40.715 true 00:13:40.715 08:14:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:40.715 08:14:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.095 08:14:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.095 08:14:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:13:42.095 08:14:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:42.354 true 00:13:42.354 08:14:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:42.354 08:14:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.290 08:14:16 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.290 08:14:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:13:43.290 08:14:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:43.548 true 00:13:43.548 08:14:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:43.548 08:14:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.548 08:14:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.806 08:14:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:13:43.806 08:14:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:44.064 true 00:13:44.064 08:14:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:44.064 08:14:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.064 08:14:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:13:44.323 08:14:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:13:44.323 08:14:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:44.582 true 00:13:44.582 08:14:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:44.582 08:14:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.517 08:14:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.517 08:14:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:13:45.517 08:14:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:45.776 true 00:13:45.776 08:14:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:45.776 08:14:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.776 08:14:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.034 08:14:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:13:46.034 08:14:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:46.293 true 00:13:46.293 08:14:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:46.293 08:14:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.552 08:14:20 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.552 08:14:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:13:46.552 08:14:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:46.810 true 00:13:46.810 08:14:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:46.810 08:14:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.070 08:14:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.070 08:14:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:13:47.070 08:14:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:47.329 Initializing NVMe Controllers 00:13:47.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:47.329 Controller IO queue size 128, less than required. 00:13:47.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:47.329 Controller IO queue size 128, less than required. 00:13:47.329 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:47.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:47.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:47.329 Initialization complete. Launching workers. 
00:13:47.329 ======================================================== 00:13:47.329 Latency(us) 00:13:47.329 Device Information : IOPS MiB/s Average min max 00:13:47.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2457.81 1.20 27894.37 1289.09 1052219.69 00:13:47.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14905.25 7.28 8565.66 1929.85 297267.31 00:13:47.329 ======================================================== 00:13:47.329 Total : 17363.06 8.48 11301.72 1289.09 1052219.69 00:13:47.329 00:13:47.329 true 00:13:47.329 08:14:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2188796 00:13:47.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2188796) - No such process 00:13:47.329 08:14:20 -- target/ns_hotplug_stress.sh@44 -- # wait 2188796 00:13:47.329 08:14:20 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:47.329 08:14:20 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:13:47.329 08:14:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:47.329 08:14:20 -- nvmf/common.sh@116 -- # sync 00:13:47.329 08:14:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:47.329 08:14:20 -- nvmf/common.sh@119 -- # set +e 00:13:47.329 08:14:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:47.329 08:14:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:47.329 rmmod nvme_tcp 00:13:47.329 rmmod nvme_fabrics 00:13:47.329 rmmod nvme_keyring 00:13:47.329 08:14:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:47.329 08:14:20 -- nvmf/common.sh@123 -- # set -e 00:13:47.329 08:14:20 -- nvmf/common.sh@124 -- # return 0 00:13:47.329 08:14:20 -- nvmf/common.sh@477 -- # '[' -n 2188457 ']' 00:13:47.329 08:14:20 -- nvmf/common.sh@478 -- # killprocess 2188457 00:13:47.329 08:14:20 -- common/autotest_common.sh@924 -- # '[' -z 2188457 ']' 00:13:47.329 08:14:20 -- common/autotest_common.sh@928 -- # kill -0 2188457 00:13:47.329 
08:14:20 -- common/autotest_common.sh@929 -- # uname 00:13:47.329 08:14:20 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:47.329 08:14:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2188457 00:13:47.589 08:14:21 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:13:47.589 08:14:21 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:13:47.589 08:14:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2188457' 00:13:47.589 killing process with pid 2188457 00:13:47.589 08:14:21 -- common/autotest_common.sh@943 -- # kill 2188457 00:13:47.589 08:14:21 -- common/autotest_common.sh@948 -- # wait 2188457 00:13:47.589 08:14:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:47.589 08:14:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:47.589 08:14:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:47.589 08:14:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.589 08:14:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:47.589 08:14:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.589 08:14:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.589 08:14:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.128 08:14:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:50.128 00:13:50.128 real 0m41.320s 00:13:50.128 user 2m28.460s 00:13:50.128 sys 0m10.798s 00:13:50.128 08:14:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:50.128 08:14:23 -- common/autotest_common.sh@10 -- # set +x 00:13:50.128 ************************************ 00:13:50.128 END TEST nvmf_ns_hotplug_stress 00:13:50.128 ************************************ 00:13:50.128 08:14:23 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:50.128 08:14:23 -- common/autotest_common.sh@1075 -- 
# '[' 3 -le 1 ']' 00:13:50.128 08:14:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:50.128 08:14:23 -- common/autotest_common.sh@10 -- # set +x 00:13:50.128 ************************************ 00:13:50.128 START TEST nvmf_connect_stress 00:13:50.128 ************************************ 00:13:50.128 08:14:23 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:50.128 * Looking for test storage... 00:13:50.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.128 08:14:23 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.128 08:14:23 -- nvmf/common.sh@7 -- # uname -s 00:13:50.128 08:14:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.128 08:14:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.128 08:14:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.128 08:14:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.128 08:14:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.128 08:14:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.128 08:14:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.128 08:14:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.128 08:14:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.128 08:14:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.128 08:14:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:50.128 08:14:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:50.128 08:14:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.128 08:14:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.128 08:14:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.128 
08:14:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.128 08:14:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.128 08:14:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.128 08:14:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.128 08:14:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.128 08:14:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.129 08:14:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.129 08:14:23 -- paths/export.sh@5 -- # export PATH 00:13:50.129 08:14:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.129 08:14:23 -- nvmf/common.sh@46 -- # : 0 00:13:50.129 08:14:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.129 08:14:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.129 08:14:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.129 08:14:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.129 08:14:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.129 08:14:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:50.129 08:14:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.129 08:14:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.129 08:14:23 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:50.129 08:14:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.129 08:14:23 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:13:50.129 08:14:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:50.129 08:14:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.129 08:14:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.129 08:14:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.129 08:14:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.129 08:14:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.129 08:14:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:50.129 08:14:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:50.129 08:14:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:50.129 08:14:23 -- common/autotest_common.sh@10 -- # set +x 00:13:55.408 08:14:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:55.408 08:14:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:55.408 08:14:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:55.408 08:14:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:55.408 08:14:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:55.408 08:14:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:55.408 08:14:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:55.408 08:14:29 -- nvmf/common.sh@294 -- # net_devs=() 00:13:55.408 08:14:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:55.408 08:14:29 -- nvmf/common.sh@295 -- # e810=() 00:13:55.408 08:14:29 -- nvmf/common.sh@295 -- # local -ga e810 00:13:55.408 08:14:29 -- nvmf/common.sh@296 -- # x722=() 00:13:55.408 08:14:29 -- nvmf/common.sh@296 -- # local -ga x722 00:13:55.408 08:14:29 -- nvmf/common.sh@297 -- # mlx=() 00:13:55.408 08:14:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:55.408 08:14:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.408 08:14:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.408 08:14:29 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.409 08:14:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:55.409 08:14:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:55.409 08:14:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:55.409 08:14:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:55.409 08:14:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:55.409 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:55.409 08:14:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:55.409 08:14:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:55.409 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:13:55.409 08:14:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:55.409 08:14:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:55.409 08:14:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.409 08:14:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:55.409 08:14:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.409 08:14:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:55.409 Found net devices under 0000:af:00.0: cvl_0_0 00:13:55.409 08:14:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.409 08:14:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:55.409 08:14:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.409 08:14:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:55.409 08:14:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.409 08:14:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:55.409 Found net devices under 0000:af:00.1: cvl_0_1 00:13:55.409 08:14:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.409 08:14:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:55.409 08:14:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:55.409 08:14:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:55.409 08:14:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:55.409 08:14:29 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:55.409 08:14:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.409 08:14:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.409 08:14:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.409 08:14:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:55.409 08:14:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.409 08:14:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.409 08:14:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:55.409 08:14:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.409 08:14:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.409 08:14:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:55.409 08:14:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:55.409 08:14:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.409 08:14:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.669 08:14:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.669 08:14:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.669 08:14:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:55.669 08:14:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.669 08:14:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.669 08:14:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.669 08:14:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:55.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:55.669 00:13:55.669 --- 10.0.0.2 ping statistics --- 00:13:55.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.669 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:55.669 08:14:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:13:55.669 00:13:55.669 --- 10.0.0.1 ping statistics --- 00:13:55.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.669 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:13:55.669 08:14:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.669 08:14:29 -- nvmf/common.sh@410 -- # return 0 00:13:55.669 08:14:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:55.669 08:14:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.669 08:14:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:55.669 08:14:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:55.669 08:14:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.669 08:14:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:55.669 08:14:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:55.929 08:14:29 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:55.929 08:14:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:55.929 08:14:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:55.929 08:14:29 -- common/autotest_common.sh@10 -- # set +x 00:13:55.929 08:14:29 -- nvmf/common.sh@469 -- # nvmfpid=2197907 00:13:55.929 08:14:29 -- nvmf/common.sh@470 -- # waitforlisten 2197907 00:13:55.929 08:14:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:55.929 08:14:29 -- 
common/autotest_common.sh@817 -- # '[' -z 2197907 ']' 00:13:55.929 08:14:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.929 08:14:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:55.929 08:14:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.929 08:14:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:55.929 08:14:29 -- common/autotest_common.sh@10 -- # set +x 00:13:55.929 [2024-02-13 08:14:29.409422] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:55.929 [2024-02-13 08:14:29.409466] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.929 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.929 [2024-02-13 08:14:29.473388] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.929 [2024-02-13 08:14:29.549988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:55.929 [2024-02-13 08:14:29.550089] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.929 [2024-02-13 08:14:29.550096] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.929 [2024-02-13 08:14:29.550102] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:55.929 [2024-02-13 08:14:29.550207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.929 [2024-02-13 08:14:29.550292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.929 [2024-02-13 08:14:29.550293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.867 08:14:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:56.867 08:14:30 -- common/autotest_common.sh@850 -- # return 0 00:13:56.867 08:14:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:56.867 08:14:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:56.867 08:14:30 -- common/autotest_common.sh@10 -- # set +x 00:13:56.867 08:14:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.867 08:14:30 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.867 08:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.867 08:14:30 -- common/autotest_common.sh@10 -- # set +x 00:13:56.867 [2024-02-13 08:14:30.241267] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.867 08:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.867 08:14:30 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.867 08:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.867 08:14:30 -- common/autotest_common.sh@10 -- # set +x 00:13:56.867 08:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.867 08:14:30 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.867 08:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.867 08:14:30 -- common/autotest_common.sh@10 -- # set +x 00:13:56.867 [2024-02-13 08:14:30.271751] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:56.867 08:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.867 08:14:30 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.867 08:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.867 08:14:30 -- common/autotest_common.sh@10 -- # set +x 00:13:56.867 NULL1 00:13:56.867 08:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.867 08:14:30 -- target/connect_stress.sh@21 -- # PERF_PID=2198001 00:13:56.867 08:14:30 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.867 08:14:30 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:56.867 08:14:30 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 EAL: No free 2048 kB hugepages reported on 
node 1 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 08:14:30 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.867 08:14:30 -- target/connect_stress.sh@28 -- # cat 00:13:56.867 
08:14:30 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:56.867 08:14:30 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:56.867 08:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:56.867 08:14:30 -- common/autotest_common.sh@10 -- # set +x
00:13:57.127 08:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:57.127 08:14:30 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:57.127 08:14:30 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:57.127 08:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:57.127 08:14:30 -- common/autotest_common.sh@10 -- # set +x
00:13:57.386 08:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:57.386 08:14:31 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:57.386 08:14:31 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:57.386 08:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:57.386 08:14:31 -- common/autotest_common.sh@10 -- # set +x
00:13:57.954 08:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:57.954 08:14:31 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:57.954 08:14:31 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:57.954 08:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:57.954 08:14:31 -- common/autotest_common.sh@10 -- # set +x
00:13:58.215 08:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.215 08:14:31 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:58.215 08:14:31 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:58.215 08:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.215 08:14:31 -- common/autotest_common.sh@10 -- # set +x
00:13:58.474 08:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.474 08:14:31 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:58.474 08:14:31 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:58.474 08:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.474 08:14:31 -- common/autotest_common.sh@10 -- # set +x
00:13:58.733 08:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.733 08:14:32 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:58.733 08:14:32 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:58.733 08:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.733 08:14:32 -- common/autotest_common.sh@10 -- # set +x
00:13:58.992 08:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:58.992 08:14:32 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:58.992 08:14:32 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:58.992 08:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:58.992 08:14:32 -- common/autotest_common.sh@10 -- # set +x
00:13:59.561 08:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:59.561 08:14:32 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:59.561 08:14:32 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:59.561 08:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:59.561 08:14:32 -- common/autotest_common.sh@10 -- # set +x
00:13:59.830 08:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:59.830 08:14:33 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:13:59.830 08:14:33 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:59.830 08:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:59.830 08:14:33 -- common/autotest_common.sh@10 -- # set +x
00:14:00.138 08:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:00.138 08:14:33 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:00.138 08:14:33 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:00.138 08:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:00.138 08:14:33 -- common/autotest_common.sh@10 -- # set +x
00:14:00.397 08:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:00.397 08:14:33 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:00.397 08:14:33 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:00.397 08:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:00.397 08:14:33 -- common/autotest_common.sh@10 -- # set +x
00:14:00.657 08:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:00.657 08:14:34 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:00.657 08:14:34 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:00.657 08:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:00.657 08:14:34 -- common/autotest_common.sh@10 -- # set +x
00:14:00.916 08:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:00.916 08:14:34 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:00.916 08:14:34 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:00.916 08:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:00.916 08:14:34 -- common/autotest_common.sh@10 -- # set +x
00:14:01.485 08:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:01.485 08:14:34 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:01.485 08:14:34 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:01.485 08:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:01.485 08:14:34 -- common/autotest_common.sh@10 -- # set +x
00:14:01.744 08:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:01.744 08:14:35 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:01.744 08:14:35 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:01.744 08:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:01.744 08:14:35 -- common/autotest_common.sh@10 -- # set +x
00:14:02.003 08:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:02.003 08:14:35 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:02.003 08:14:35 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:02.003 08:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:02.003 08:14:35 -- common/autotest_common.sh@10 -- # set +x
00:14:02.262 08:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:02.262 08:14:35 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:02.262 08:14:35 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:02.262 08:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:02.262 08:14:35 -- common/autotest_common.sh@10 -- # set +x
00:14:02.521 08:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:02.521 08:14:36 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:02.521 08:14:36 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:02.521 08:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:02.521 08:14:36 -- common/autotest_common.sh@10 -- # set +x
00:14:03.087 08:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:03.087 08:14:36 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:03.087 08:14:36 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:03.087 08:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:03.087 08:14:36 -- common/autotest_common.sh@10 -- # set +x
00:14:03.345 08:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:03.345 08:14:36 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:03.345 08:14:36 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:03.345 08:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:03.345 08:14:36 -- common/autotest_common.sh@10 -- # set +x
00:14:03.605 08:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:03.605 08:14:37 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:03.605 08:14:37 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:03.605 08:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:03.605 08:14:37 -- common/autotest_common.sh@10 -- # set +x
00:14:03.864 08:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:03.864 08:14:37 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:03.864 08:14:37 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:03.864 08:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:03.864 08:14:37 -- common/autotest_common.sh@10 -- # set +x
00:14:04.431 08:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:04.431 08:14:37 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:04.431 08:14:37 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:04.431 08:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:04.431 08:14:37 -- common/autotest_common.sh@10 -- # set +x
00:14:04.689 08:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:04.689 08:14:38 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:04.689 08:14:38 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:04.689 08:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:04.690 08:14:38 -- common/autotest_common.sh@10 -- # set +x
00:14:04.948 08:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:04.948 08:14:38 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:04.948 08:14:38 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:04.948 08:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:04.948 08:14:38 -- common/autotest_common.sh@10 -- # set +x
00:14:05.207 08:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:05.207 08:14:38 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:05.207 08:14:38 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:05.207 08:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:05.207 08:14:38 -- common/autotest_common.sh@10 -- # set +x
00:14:05.465 08:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:05.465 08:14:39 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:05.465 08:14:39 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:05.465 08:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:05.465 08:14:39 -- common/autotest_common.sh@10 -- # set +x
00:14:06.032 08:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:06.032 08:14:39 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:06.032 08:14:39 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.032 08:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:06.032 08:14:39 -- common/autotest_common.sh@10 -- # set +x
00:14:06.291 08:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:06.291 08:14:39 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:06.291 08:14:39 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.291 08:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:06.291 08:14:39 -- common/autotest_common.sh@10 -- # set +x
00:14:06.550 08:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:06.550 08:14:40 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:06.550 08:14:40 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.550 08:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:06.550 08:14:40 -- common/autotest_common.sh@10 -- # set +x
00:14:06.808 08:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:06.808 08:14:40 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:06.808 08:14:40 -- target/connect_stress.sh@35 -- # rpc_cmd
00:14:06.809 08:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:06.809 08:14:40 -- common/autotest_common.sh@10 -- # set +x
00:14:06.809 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:07.066 08:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:07.066 08:14:40 -- target/connect_stress.sh@34 -- # kill -0 2198001
00:14:07.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2198001) - No such process
00:14:07.066 08:14:40 -- target/connect_stress.sh@38 -- # wait 2198001
00:14:07.066 08:14:40 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:07.066 08:14:40 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
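The long run of identical `@34` / `@35` entries above is connect_stress.sh polling: line 34 of the script runs `kill -0` on the stress process until the PID disappears ("No such process"), then line 38 `wait`s for it. A minimal sketch of that pattern, where the background `sleep` and the 0.2 s interval are illustrative stand-ins for the real stress workload and the `rpc_cmd` calls:

```shell
#!/usr/bin/env bash
# Stand-in worker; connect_stress.sh runs the real stress tool here.
sleep 1 &
pid=$!

# kill -0 delivers no signal; it only checks that the PID still exists.
while kill -0 "$pid" 2>/dev/null; do
    sleep 0.2   # assumed poll interval; the script issues rpc_cmd here instead
done

wait "$pid"     # reap the child, as `wait 2198001` does in the log
echo "process $pid is gone"
```

Note that `kill -0` can race with PID reuse on long-lived systems; the script sidesteps this by `wait`ing on its own child, which the kernel keeps as a zombie until reaped.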
00:14:07.066 08:14:40 -- target/connect_stress.sh@43 -- # nvmftestfini
00:14:07.066 08:14:40 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:07.066 08:14:40 -- nvmf/common.sh@116 -- # sync
00:14:07.066 08:14:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:07.066 08:14:40 -- nvmf/common.sh@119 -- # set +e
00:14:07.066 08:14:40 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:07.066 08:14:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:07.066 rmmod nvme_tcp
00:14:07.066 rmmod nvme_fabrics
00:14:07.066 rmmod nvme_keyring
00:14:07.325 08:14:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:07.325 08:14:40 -- nvmf/common.sh@123 -- # set -e
00:14:07.325 08:14:40 -- nvmf/common.sh@124 -- # return 0
00:14:07.325 08:14:40 -- nvmf/common.sh@477 -- # '[' -n 2197907 ']'
00:14:07.325 08:14:40 -- nvmf/common.sh@478 -- # killprocess 2197907
00:14:07.325 08:14:40 -- common/autotest_common.sh@924 -- # '[' -z 2197907 ']'
00:14:07.325 08:14:40 -- common/autotest_common.sh@928 -- # kill -0 2197907
00:14:07.325 08:14:40 -- common/autotest_common.sh@929 -- # uname
00:14:07.325 08:14:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:14:07.325 08:14:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2197907
00:14:07.325 08:14:40 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:14:07.325 08:14:40 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:14:07.325 08:14:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2197907'
00:14:07.325 killing process with pid 2197907
00:14:07.325 08:14:40 -- common/autotest_common.sh@943 -- # kill 2197907
00:14:07.325 08:14:40 -- common/autotest_common.sh@948 -- # wait 2197907
00:14:07.584 08:14:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:14:07.584 08:14:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:14:07.584 08:14:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:14:07.584 08:14:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:07.584 08:14:41 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:14:07.584 08:14:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:07.584 08:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:07.584 08:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:09.493 08:14:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:14:09.493
00:14:09.493 real 0m19.737s
00:14:09.493 user 0m41.993s
00:14:09.493 sys 0m8.314s
00:14:09.493 08:14:43 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:09.493 08:14:43 -- common/autotest_common.sh@10 -- # set +x
00:14:09.493 ************************************
00:14:09.493 END TEST nvmf_connect_stress
00:14:09.493 ************************************
00:14:09.493 08:14:43 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:09.493 08:14:43 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:14:09.493 08:14:43 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:14:09.493 08:14:43 -- common/autotest_common.sh@10 -- # set +x
00:14:09.493 ************************************
00:14:09.493 START TEST nvmf_fused_ordering
00:14:09.493 ************************************
00:14:09.493 08:14:43 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:09.753 * Looking for test storage...
00:14:09.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.753 08:14:43 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.753 08:14:43 -- nvmf/common.sh@7 -- # uname -s 00:14:09.753 08:14:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.753 08:14:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.753 08:14:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.753 08:14:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.753 08:14:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.753 08:14:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.753 08:14:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.753 08:14:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.753 08:14:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.753 08:14:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.753 08:14:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:09.753 08:14:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:09.753 08:14:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.753 08:14:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.753 08:14:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.753 08:14:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.753 08:14:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.753 08:14:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.753 08:14:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.753 08:14:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.753 08:14:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.754 08:14:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.754 08:14:43 -- paths/export.sh@5 -- # export PATH 00:14:09.754 08:14:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.754 08:14:43 -- nvmf/common.sh@46 -- # : 0 00:14:09.754 08:14:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:09.754 08:14:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:09.754 08:14:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:09.754 08:14:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.754 08:14:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.754 08:14:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:09.754 08:14:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:09.754 08:14:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:09.754 08:14:43 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:09.754 08:14:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:09.754 08:14:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.754 08:14:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:09.754 08:14:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:09.754 08:14:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:09.754 08:14:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.754 08:14:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.754 08:14:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.754 08:14:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:09.754 08:14:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:09.754 08:14:43 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:09.754 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:16.324 08:14:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:16.325 08:14:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:16.325 08:14:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:16.325 08:14:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:16.325 08:14:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:16.325 08:14:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:16.325 08:14:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:16.325 08:14:48 -- nvmf/common.sh@294 -- # net_devs=() 00:14:16.325 08:14:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:16.325 08:14:48 -- nvmf/common.sh@295 -- # e810=() 00:14:16.325 08:14:48 -- nvmf/common.sh@295 -- # local -ga e810 00:14:16.325 08:14:48 -- nvmf/common.sh@296 -- # x722=() 00:14:16.325 08:14:48 -- nvmf/common.sh@296 -- # local -ga x722 00:14:16.325 08:14:48 -- nvmf/common.sh@297 -- # mlx=() 00:14:16.325 08:14:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:16.325 08:14:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.325 08:14:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:16.325 08:14:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:16.325 08:14:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:16.325 08:14:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:16.325 08:14:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:16.325 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:16.325 08:14:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:16.325 08:14:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:16.325 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:16.325 08:14:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:16.325 08:14:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:16.325 08:14:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.325 08:14:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:16.325 08:14:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.325 08:14:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:16.325 Found net devices under 0000:af:00.0: cvl_0_0 00:14:16.325 08:14:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.325 08:14:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:16.325 08:14:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.325 08:14:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:16.325 08:14:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.325 08:14:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:16.325 Found net devices under 0000:af:00.1: cvl_0_1 00:14:16.325 08:14:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.325 08:14:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:16.325 08:14:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:16.325 08:14:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:16.325 08:14:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:16.325 08:14:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.325 08:14:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.325 08:14:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.325 08:14:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:16.325 08:14:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.325 08:14:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.325 08:14:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:16.325 08:14:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
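The nvmf_tcp_init sequence that runs next moves the target port (cvl_0_0) into the cvl_0_0_ns_spdk namespace and addresses the two sides as 10.0.0.2 (target) and 10.0.0.1 (initiator). The topology can be sketched as below; the `run` wrapper only prints each command (a dry run, since the real commands need root and the named NICs), with interface and namespace names copied from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init builds.
run() { echo "+ $*"; }   # swap for `eval "$@"` (as root) to apply for real

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0           # target-side port, moved into the namespace
INI_IF=cvl_0_1           # initiator-side port, stays in the root namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up

# Verify both directions, as the log's `ping -c 1` calls do.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target in its own namespace forces initiator-to-target traffic over the physical link between the two ports instead of the kernel loopback path, which is what makes this a "phy" test.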
00:14:16.325 08:14:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:16.325 08:14:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:14:16.325 08:14:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:14:16.325 08:14:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:14:16.325 08:14:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:16.325 08:14:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:16.325 08:14:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:16.325 08:14:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:14:16.325 08:14:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:16.325 08:14:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:16.325 08:14:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:16.325 08:14:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:14:16.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:16.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms
00:14:16.325
00:14:16.325 --- 10.0.0.2 ping statistics ---
00:14:16.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:16.325 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:14:16.325 08:14:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:16.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:16.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms
00:14:16.325
00:14:16.325 --- 10.0.0.1 ping statistics ---
00:14:16.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:16.325 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:14:16.325 08:14:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:16.325 08:14:49 -- nvmf/common.sh@410 -- # return 0
00:14:16.325 08:14:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:14:16.325 08:14:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:16.325 08:14:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:14:16.325 08:14:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:14:16.325 08:14:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:16.325 08:14:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:14:16.325 08:14:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:14:16.325 08:14:49 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:14:16.325 08:14:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:14:16.325 08:14:49 -- common/autotest_common.sh@710 -- # xtrace_disable
00:14:16.325 08:14:49 -- common/autotest_common.sh@10 -- # set +x
00:14:16.325 08:14:49 -- nvmf/common.sh@469 -- # nvmfpid=2203651
00:14:16.325 08:14:49 -- nvmf/common.sh@470 -- # waitforlisten 2203651
00:14:16.325 08:14:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:16.325 08:14:49 -- common/autotest_common.sh@817 -- # '[' -z 2203651 ']'
00:14:16.325 08:14:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:16.325 08:14:49 -- common/autotest_common.sh@822 -- # local max_retries=100
00:14:16.325 08:14:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:16.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.325 08:14:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:16.325 08:14:49 -- common/autotest_common.sh@10 -- # set +x 00:14:16.325 [2024-02-13 08:14:49.311671] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:16.325 [2024-02-13 08:14:49.311710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.325 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.325 [2024-02-13 08:14:49.373892] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.325 [2024-02-13 08:14:49.441829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:16.325 [2024-02-13 08:14:49.441941] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.325 [2024-02-13 08:14:49.441948] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.325 [2024-02-13 08:14:49.441955] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
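`waitforlisten 2203651` above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock (the trace shows `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`). The generic shape is a bounded retry loop; the sketch below is illustrative, not SPDK's actual helper, and only tests that the socket file exists rather than issuing a real RPC:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, or fail after max_retries.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -S is true once the server has created and bound the socket.
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

wait_for_socket /var/tmp/spdk.sock 5 || echo "no SPDK app listening here"
```

The real helper goes further and confirms the process can actually accept an RPC, since the socket file can exist before the app's RPC server is ready.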
00:14:16.325 [2024-02-13 08:14:49.441973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.585 08:14:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:16.585 08:14:50 -- common/autotest_common.sh@850 -- # return 0 00:14:16.585 08:14:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:16.585 08:14:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:16.585 08:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 08:14:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.585 08:14:50 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.585 08:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.585 08:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 [2024-02-13 08:14:50.139492] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.585 08:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.585 08:14:50 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:16.585 08:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.585 08:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 08:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.585 08:14:50 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.585 08:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.585 08:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 [2024-02-13 08:14:50.155642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.585 08:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.585 08:14:50 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:16.585 08:14:50 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.585 08:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 NULL1 00:14:16.585 08:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.585 08:14:50 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:16.585 08:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.585 08:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 08:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.585 08:14:50 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:16.585 08:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.585 08:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.585 08:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.585 08:14:50 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:16.585 [2024-02-13 08:14:50.199172] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:14:16.585 [2024-02-13 08:14:50.199204] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203818 ] 00:14:16.585 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.154 Attached to nqn.2016-06.io.spdk:cnode1 00:14:17.154 Namespace ID: 1 size: 1GB 00:14:17.154 fused_ordering(0) 00:14:17.154 fused_ordering(1) 00:14:17.154 fused_ordering(2) 00:14:17.154 fused_ordering(3) 00:14:17.154 fused_ordering(4) 00:14:17.154 fused_ordering(5) 00:14:17.154 fused_ordering(6) 00:14:17.154 fused_ordering(7) 00:14:17.154 fused_ordering(8) 00:14:17.154 fused_ordering(9) 00:14:17.154 fused_ordering(10) 00:14:17.154 fused_ordering(11) 00:14:17.154 fused_ordering(12) 00:14:17.154 fused_ordering(13) 00:14:17.154 fused_ordering(14) 00:14:17.154 fused_ordering(15) 00:14:17.154 fused_ordering(16) 00:14:17.154 fused_ordering(17) 00:14:17.154 fused_ordering(18) 00:14:17.154 fused_ordering(19) 00:14:17.154 fused_ordering(20) 00:14:17.154 fused_ordering(21) 00:14:17.154 fused_ordering(22) 00:14:17.154 fused_ordering(23) 00:14:17.154 fused_ordering(24) 00:14:17.154 fused_ordering(25) 00:14:17.154 fused_ordering(26) 00:14:17.155 fused_ordering(27) 00:14:17.155 fused_ordering(28) 00:14:17.155 fused_ordering(29) 00:14:17.155 fused_ordering(30) 00:14:17.155 fused_ordering(31) 00:14:17.155 fused_ordering(32) 00:14:17.155 fused_ordering(33) 00:14:17.155 fused_ordering(34) 00:14:17.155 fused_ordering(35) 00:14:17.155 fused_ordering(36) 00:14:17.155 fused_ordering(37) 00:14:17.155 fused_ordering(38) 00:14:17.155 fused_ordering(39) 00:14:17.155 fused_ordering(40) 00:14:17.155 fused_ordering(41) 00:14:17.155 fused_ordering(42) 00:14:17.155 fused_ordering(43) 00:14:17.155 fused_ordering(44) 00:14:17.155 fused_ordering(45) 00:14:17.155 fused_ordering(46) 00:14:17.155 fused_ordering(47) 00:14:17.155 fused_ordering(48) 
00:14:17.155 fused_ordering(49) … fused_ordering(1017) 00:14:19.435 [repetitive fused_ordering counter output elided; the test iterated through all 1,024 fused operations]
00:14:19.435 fused_ordering(1018) 00:14:19.435 fused_ordering(1019) 00:14:19.435 fused_ordering(1020) 00:14:19.435 fused_ordering(1021) 00:14:19.435 fused_ordering(1022) 00:14:19.435 fused_ordering(1023) 00:14:19.435 08:14:53 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:19.435 08:14:53 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:19.435 08:14:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:19.435 08:14:53 -- nvmf/common.sh@116 -- # sync 00:14:19.435 08:14:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:19.435 08:14:53 -- nvmf/common.sh@119 -- # set +e 00:14:19.435 08:14:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:19.435 08:14:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:19.435 rmmod nvme_tcp 00:14:19.435 rmmod nvme_fabrics 00:14:19.695 rmmod nvme_keyring 00:14:19.695 08:14:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:19.695 08:14:53 -- nvmf/common.sh@123 -- # set -e 00:14:19.695 08:14:53 -- nvmf/common.sh@124 -- # return 0 00:14:19.695 08:14:53 -- nvmf/common.sh@477 -- # '[' -n 2203651 ']' 00:14:19.695 08:14:53 -- nvmf/common.sh@478 -- # killprocess 2203651 00:14:19.695 08:14:53 -- common/autotest_common.sh@924 -- # '[' -z 2203651 ']' 00:14:19.695 08:14:53 -- common/autotest_common.sh@928 -- # kill -0 2203651 00:14:19.695 08:14:53 -- common/autotest_common.sh@929 -- # uname 00:14:19.695 08:14:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:19.695 08:14:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2203651 00:14:19.695 08:14:53 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:14:19.695 08:14:53 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:14:19.695 08:14:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2203651' 00:14:19.695 killing process with pid 2203651 00:14:19.695 08:14:53 -- common/autotest_common.sh@943 -- # kill 2203651 00:14:19.695 08:14:53 -- common/autotest_common.sh@948 -- 
# wait 2203651 00:14:19.954 08:14:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:19.954 08:14:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:19.954 08:14:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:19.954 08:14:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.954 08:14:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:19.954 08:14:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.954 08:14:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.954 08:14:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.904 08:14:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:21.904 00:14:21.904 real 0m12.326s 00:14:21.904 user 0m6.962s 00:14:21.904 sys 0m6.669s 00:14:21.904 08:14:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.904 08:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:21.904 ************************************ 00:14:21.904 END TEST nvmf_fused_ordering 00:14:21.904 ************************************ 00:14:21.904 08:14:55 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.904 08:14:55 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:21.904 08:14:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:21.904 08:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:21.904 ************************************ 00:14:21.904 START TEST nvmf_delete_subsystem 00:14:21.904 ************************************ 00:14:21.904 08:14:55 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.904 * Looking for test storage... 
00:14:21.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.904 08:14:55 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.904 08:14:55 -- nvmf/common.sh@7 -- # uname -s 00:14:21.904 08:14:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.905 08:14:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.905 08:14:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.905 08:14:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.905 08:14:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.905 08:14:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.905 08:14:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.905 08:14:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.905 08:14:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.905 08:14:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.905 08:14:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:21.905 08:14:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:21.905 08:14:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.905 08:14:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.905 08:14:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.905 08:14:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.905 08:14:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.905 08:14:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.905 08:14:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.905 08:14:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.905 08:14:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.905 08:14:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.905 08:14:55 -- paths/export.sh@5 -- # export PATH 00:14:21.905 08:14:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.905 08:14:55 -- nvmf/common.sh@46 -- # : 0 00:14:21.905 08:14:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:21.905 08:14:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:21.905 08:14:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:21.905 08:14:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.905 08:14:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.905 08:14:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:21.905 08:14:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:21.905 08:14:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:21.905 08:14:55 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:21.905 08:14:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:21.905 08:14:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.905 08:14:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:21.905 08:14:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:21.905 08:14:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:21.905 08:14:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.905 08:14:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.905 08:14:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.164 08:14:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:22.164 08:14:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:22.164 08:14:55 
-- nvmf/common.sh@284 -- # xtrace_disable 00:14:22.164 08:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:28.731 08:15:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:28.731 08:15:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:28.731 08:15:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:28.731 08:15:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:28.731 08:15:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:28.731 08:15:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:28.731 08:15:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:28.731 08:15:01 -- nvmf/common.sh@294 -- # net_devs=() 00:14:28.731 08:15:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:28.731 08:15:01 -- nvmf/common.sh@295 -- # e810=() 00:14:28.731 08:15:01 -- nvmf/common.sh@295 -- # local -ga e810 00:14:28.731 08:15:01 -- nvmf/common.sh@296 -- # x722=() 00:14:28.731 08:15:01 -- nvmf/common.sh@296 -- # local -ga x722 00:14:28.731 08:15:01 -- nvmf/common.sh@297 -- # mlx=() 00:14:28.731 08:15:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:28.731 08:15:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.731 08:15:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:28.731 08:15:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:28.731 08:15:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:28.731 08:15:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:28.731 08:15:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:28.731 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:28.731 08:15:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:28.731 08:15:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:28.731 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:28.731 08:15:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:28.731 08:15:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:28.731 08:15:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.731 08:15:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:28.731 08:15:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.731 08:15:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:28.731 Found net devices under 0000:af:00.0: cvl_0_0 00:14:28.731 08:15:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.731 08:15:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:28.731 08:15:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.731 08:15:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:28.731 08:15:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.731 08:15:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:28.731 Found net devices under 0000:af:00.1: cvl_0_1 00:14:28.731 08:15:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.731 08:15:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:28.731 08:15:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:28.731 08:15:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:28.731 08:15:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:28.731 08:15:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.731 08:15:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.731 08:15:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.731 08:15:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:28.731 08:15:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.731 08:15:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.731 08:15:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:28.731 08:15:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:28.731 08:15:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.731 08:15:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:28.731 08:15:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:28.731 08:15:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.731 08:15:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.731 08:15:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.731 08:15:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.731 08:15:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:28.731 08:15:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.731 08:15:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.731 08:15:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.731 08:15:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:28.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:14:28.731 00:14:28.731 --- 10.0.0.2 ping statistics --- 00:14:28.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.731 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:14:28.731 08:15:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:14:28.731 00:14:28.731 --- 10.0.0.1 ping statistics --- 00:14:28.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.731 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:28.732 08:15:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.732 08:15:01 -- nvmf/common.sh@410 -- # return 0 00:14:28.732 08:15:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:28.732 08:15:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.732 08:15:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:28.732 08:15:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:28.732 08:15:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.732 08:15:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:28.732 08:15:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:28.732 08:15:01 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:28.732 08:15:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:28.732 08:15:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:28.732 08:15:01 -- common/autotest_common.sh@10 -- # set +x 00:14:28.732 08:15:01 -- nvmf/common.sh@469 -- # nvmfpid=2208250 00:14:28.732 08:15:01 -- nvmf/common.sh@470 -- # waitforlisten 2208250 00:14:28.732 08:15:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:28.732 08:15:01 -- common/autotest_common.sh@817 -- # '[' -z 2208250 ']' 00:14:28.732 08:15:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.732 08:15:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:28.732 08:15:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:28.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.732 08:15:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:28.732 08:15:01 -- common/autotest_common.sh@10 -- # set +x 00:14:28.732 [2024-02-13 08:15:01.588783] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:28.732 [2024-02-13 08:15:01.588826] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.732 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.732 [2024-02-13 08:15:01.654208] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.732 [2024-02-13 08:15:01.726962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.732 [2024-02-13 08:15:01.727093] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.732 [2024-02-13 08:15:01.727102] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.732 [2024-02-13 08:15:01.727109] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:28.732 [2024-02-13 08:15:01.727150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.732 [2024-02-13 08:15:01.727153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.732 08:15:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:28.732 08:15:02 -- common/autotest_common.sh@850 -- # return 0 00:14:28.732 08:15:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:28.732 08:15:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:28.732 08:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.732 08:15:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.732 08:15:02 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.732 08:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.732 08:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.732 [2024-02-13 08:15:02.401947] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.732 08:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.732 08:15:02 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:28.732 08:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.732 08:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.991 08:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.991 08:15:02 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.991 08:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.991 08:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.991 [2024-02-13 08:15:02.422132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.991 08:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:14:28.991 08:15:02 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:28.991 08:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.991 08:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.991 NULL1 00:14:28.991 08:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.991 08:15:02 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:28.991 08:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.991 08:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.991 Delay0 00:14:28.991 08:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.992 08:15:02 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.992 08:15:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.992 08:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.992 08:15:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.992 08:15:02 -- target/delete_subsystem.sh@28 -- # perf_pid=2208320 00:14:28.992 08:15:02 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:28.992 08:15:02 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:28.992 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.992 [2024-02-13 08:15:02.502828] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:30.898 08:15:04 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.898 08:15:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.898 08:15:04 -- common/autotest_common.sh@10 -- # set +x 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Write completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 starting I/O failed: -6 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Write completed with error (sct=0, sc=8) 00:14:31.157 Write completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 starting I/O failed: -6 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Write completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 starting I/O failed: -6 00:14:31.157 Write completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 starting I/O failed: -6 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Read completed with error (sct=0, sc=8) 00:14:31.157 Write completed with error (sct=0, sc=8) 00:14:31.157 starting I/O failed: -6 00:14:31.157 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 starting I/O 
failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 [2024-02-13 08:15:04.661602] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c9a60 is same with the state(5) to be set 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 
Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 starting I/O failed: -6 00:14:31.158 [2024-02-13 08:15:04.663457] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8c000c4e0 is same with the state(5) to be set 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write 
completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error 
(sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 
Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:31.158 Read completed with error (sct=0, sc=8) 00:14:31.158 Write completed with error (sct=0, sc=8) 00:14:32.097 [2024-02-13 08:15:05.639677] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c0ab0 is same with the state(5) to be set 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 [2024-02-13 08:15:05.665915] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa8c000c230 is same with the state(5) to be set 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 
00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 [2024-02-13 08:15:05.666289] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d1fe0 is same with the state(5) to be set 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read 
completed with error (sct=0, sc=8) 00:14:32.097 [2024-02-13 08:15:05.666404] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c9d10 is same with the state(5) to be set 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 Write completed with error (sct=0, sc=8) 00:14:32.097 Read completed with error (sct=0, sc=8) 00:14:32.097 [2024-02-13 08:15:05.666545] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ca270 is same with the state(5) to be set 00:14:32.097 [2024-02-13 08:15:05.667148] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c0ab0 (9): Bad file descriptor 00:14:32.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:32.097 08:15:05 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.097 08:15:05 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:32.097 08:15:05 -- target/delete_subsystem.sh@35 -- # kill -0 2208320 00:14:32.097 08:15:05 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:32.097 Initializing NVMe Controllers 00:14:32.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.097 Controller IO queue size 128, less than required. 00:14:32.097 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:32.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:32.097 Initialization complete. Launching workers. 00:14:32.097 ======================================================== 00:14:32.097 Latency(us) 00:14:32.097 Device Information : IOPS MiB/s Average min max 00:14:32.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.98 0.08 966557.85 1111.03 1044188.17 00:14:32.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 144.65 0.07 923409.18 205.58 1011296.04 00:14:32.097 ======================================================== 00:14:32.097 Total : 316.63 0.15 946846.29 205.58 1044188.17 00:14:32.097 00:14:32.666 08:15:06 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:32.666 08:15:06 -- target/delete_subsystem.sh@35 -- # kill -0 2208320 00:14:32.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2208320) - No such process 00:14:32.666 08:15:06 -- target/delete_subsystem.sh@45 -- # NOT wait 2208320 00:14:32.666 08:15:06 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.666 08:15:06 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2208320 00:14:32.666 08:15:06 -- common/autotest_common.sh@626 -- # local 
arg=wait 00:14:32.666 08:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.666 08:15:06 -- common/autotest_common.sh@630 -- # type -t wait 00:14:32.666 08:15:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.666 08:15:06 -- common/autotest_common.sh@641 -- # wait 2208320 00:14:32.666 08:15:06 -- common/autotest_common.sh@641 -- # es=1 00:14:32.666 08:15:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:32.667 08:15:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:32.667 08:15:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:32.667 08:15:06 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.667 08:15:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.667 08:15:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.667 08:15:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.667 08:15:06 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.667 08:15:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.667 08:15:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.667 [2024-02-13 08:15:06.193781] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.667 08:15:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.667 08:15:06 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.667 08:15:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.667 08:15:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.667 08:15:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.667 08:15:06 -- target/delete_subsystem.sh@54 -- # perf_pid=2208998 00:14:32.667 08:15:06 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:32.667 08:15:06 -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:32.667 08:15:06 -- target/delete_subsystem.sh@57 -- # kill -0 2208998 00:14:32.667 08:15:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:32.667 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.667 [2024-02-13 08:15:06.254061] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:33.236 08:15:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:33.236 08:15:06 -- target/delete_subsystem.sh@57 -- # kill -0 2208998 00:14:33.236 08:15:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:33.805 08:15:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:33.805 08:15:07 -- target/delete_subsystem.sh@57 -- # kill -0 2208998 00:14:33.805 08:15:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:34.065 08:15:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:34.065 08:15:07 -- target/delete_subsystem.sh@57 -- # kill -0 2208998 00:14:34.065 08:15:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:34.633 08:15:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:34.633 08:15:08 -- target/delete_subsystem.sh@57 -- # kill -0 2208998 00:14:34.633 08:15:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.202 08:15:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:35.202 08:15:08 -- target/delete_subsystem.sh@57 -- # kill -0 2208998 00:14:35.202 08:15:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.771 08:15:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:35.771 08:15:09 -- target/delete_subsystem.sh@57 -- # 
kill -0 2208998 00:14:35.771 08:15:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.771 Initializing NVMe Controllers 00:14:35.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.771 Controller IO queue size 128, less than required. 00:14:35.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:35.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:35.771 Initialization complete. Launching workers. 00:14:35.771 ======================================================== 00:14:35.771 Latency(us) 00:14:35.771 Device Information : IOPS MiB/s Average min max 00:14:35.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003528.67 1000283.98 1042072.18 00:14:35.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005058.62 1000396.01 1012752.78 00:14:35.771 ======================================================== 00:14:35.771 Total : 256.00 0.12 1004293.65 1000283.98 1042072.18 00:14:35.771 00:14:36.339 08:15:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:36.339 08:15:09 -- target/delete_subsystem.sh@57 -- # kill -0 2208998 00:14:36.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2208998) - No such process 00:14:36.339 08:15:09 -- target/delete_subsystem.sh@67 -- # wait 2208998 00:14:36.339 08:15:09 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:36.339 08:15:09 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:36.339 08:15:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:36.339 08:15:09 -- nvmf/common.sh@116 -- # sync 00:14:36.340 08:15:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:36.340 08:15:09 -- nvmf/common.sh@119 -- # 
set +e 00:14:36.340 08:15:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:36.340 08:15:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:36.340 rmmod nvme_tcp 00:14:36.340 rmmod nvme_fabrics 00:14:36.340 rmmod nvme_keyring 00:14:36.340 08:15:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:36.340 08:15:09 -- nvmf/common.sh@123 -- # set -e 00:14:36.340 08:15:09 -- nvmf/common.sh@124 -- # return 0 00:14:36.340 08:15:09 -- nvmf/common.sh@477 -- # '[' -n 2208250 ']' 00:14:36.340 08:15:09 -- nvmf/common.sh@478 -- # killprocess 2208250 00:14:36.340 08:15:09 -- common/autotest_common.sh@924 -- # '[' -z 2208250 ']' 00:14:36.340 08:15:09 -- common/autotest_common.sh@928 -- # kill -0 2208250 00:14:36.340 08:15:09 -- common/autotest_common.sh@929 -- # uname 00:14:36.340 08:15:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:36.340 08:15:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2208250 00:14:36.340 08:15:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:36.340 08:15:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:36.340 08:15:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2208250' 00:14:36.340 killing process with pid 2208250 00:14:36.340 08:15:09 -- common/autotest_common.sh@943 -- # kill 2208250 00:14:36.340 08:15:09 -- common/autotest_common.sh@948 -- # wait 2208250 00:14:36.599 08:15:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:36.599 08:15:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:36.599 08:15:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:36.599 08:15:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.599 08:15:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:36.599 08:15:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.599 08:15:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.599 08:15:10 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.507 08:15:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:38.507 00:14:38.507 real 0m16.646s 00:14:38.507 user 0m30.389s 00:14:38.507 sys 0m5.352s 00:14:38.507 08:15:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:38.507 08:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:38.507 ************************************ 00:14:38.507 END TEST nvmf_delete_subsystem 00:14:38.507 ************************************ 00:14:38.507 08:15:12 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:38.507 08:15:12 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.507 08:15:12 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:38.507 08:15:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:38.507 08:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:38.507 ************************************ 00:14:38.507 START TEST nvmf_nvme_cli 00:14:38.507 ************************************ 00:14:38.507 08:15:12 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.767 * Looking for test storage... 
00:14:38.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.767 08:15:12 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.767 08:15:12 -- nvmf/common.sh@7 -- # uname -s 00:14:38.767 08:15:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.767 08:15:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.767 08:15:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.767 08:15:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.767 08:15:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.767 08:15:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.767 08:15:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.767 08:15:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.767 08:15:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.767 08:15:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.767 08:15:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:38.767 08:15:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:38.767 08:15:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.767 08:15:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.767 08:15:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.767 08:15:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.767 08:15:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.767 08:15:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.767 08:15:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.767 08:15:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.767 08:15:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.767 08:15:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.767 08:15:12 -- paths/export.sh@5 -- # export PATH 00:14:38.767 08:15:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.767 08:15:12 -- nvmf/common.sh@46 -- # : 0 00:14:38.767 08:15:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:38.767 08:15:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:38.767 08:15:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:38.767 08:15:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.767 08:15:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.767 08:15:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:38.767 08:15:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:38.767 08:15:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:38.767 08:15:12 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.768 08:15:12 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.768 08:15:12 -- target/nvme_cli.sh@14 -- # devs=() 00:14:38.768 08:15:12 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:38.768 08:15:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:38.768 08:15:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.768 08:15:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:38.768 08:15:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:38.768 08:15:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:38.768 08:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.768 08:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.768 08:15:12 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.768 08:15:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:38.768 08:15:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:38.768 08:15:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:38.768 08:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:45.339 08:15:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:45.339 08:15:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:45.339 08:15:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:45.339 08:15:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:45.339 08:15:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:45.339 08:15:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:45.339 08:15:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:45.339 08:15:17 -- nvmf/common.sh@294 -- # net_devs=() 00:14:45.339 08:15:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:45.339 08:15:17 -- nvmf/common.sh@295 -- # e810=() 00:14:45.339 08:15:17 -- nvmf/common.sh@295 -- # local -ga e810 00:14:45.339 08:15:17 -- nvmf/common.sh@296 -- # x722=() 00:14:45.339 08:15:17 -- nvmf/common.sh@296 -- # local -ga x722 00:14:45.339 08:15:17 -- nvmf/common.sh@297 -- # mlx=() 00:14:45.339 08:15:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:45.339 08:15:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.339 08:15:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:45.339 08:15:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:45.339 08:15:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:45.339 08:15:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.339 08:15:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:45.339 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:45.339 08:15:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.339 08:15:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:45.339 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:45.339 08:15:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:45.339 08:15:17 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.339 08:15:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.339 08:15:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.339 08:15:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.339 08:15:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:45.339 Found net devices under 0000:af:00.0: cvl_0_0 00:14:45.339 08:15:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.339 08:15:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.339 08:15:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.339 08:15:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.339 08:15:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.339 08:15:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:45.339 Found net devices under 0000:af:00.1: cvl_0_1 00:14:45.339 08:15:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.339 08:15:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:45.339 08:15:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:45.339 08:15:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:45.339 08:15:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:45.339 08:15:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.339 08:15:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.339 08:15:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.339 08:15:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:45.339 08:15:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.339 08:15:17 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.339 08:15:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:45.339 08:15:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.339 08:15:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.340 08:15:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:45.340 08:15:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:45.340 08:15:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.340 08:15:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.340 08:15:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.340 08:15:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.340 08:15:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:45.340 08:15:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.340 08:15:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.340 08:15:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.340 08:15:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:45.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:14:45.340 00:14:45.340 --- 10.0.0.2 ping statistics --- 00:14:45.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.340 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:14:45.340 08:15:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:14:45.340 00:14:45.340 --- 10.0.0.1 ping statistics --- 00:14:45.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.340 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:14:45.340 08:15:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.340 08:15:18 -- nvmf/common.sh@410 -- # return 0 00:14:45.340 08:15:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:45.340 08:15:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.340 08:15:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:45.340 08:15:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:45.340 08:15:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.340 08:15:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:45.340 08:15:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:45.340 08:15:18 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:45.340 08:15:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:45.340 08:15:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:45.340 08:15:18 -- common/autotest_common.sh@10 -- # set +x 00:14:45.340 08:15:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.340 08:15:18 -- nvmf/common.sh@469 -- # nvmfpid=2213871 00:14:45.340 08:15:18 -- nvmf/common.sh@470 -- # waitforlisten 2213871 00:14:45.340 08:15:18 -- common/autotest_common.sh@817 -- # '[' -z 2213871 ']' 00:14:45.340 08:15:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.340 08:15:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:45.340 08:15:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:45.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.340 08:15:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:45.340 08:15:18 -- common/autotest_common.sh@10 -- # set +x 00:14:45.340 [2024-02-13 08:15:18.288143] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:45.340 [2024-02-13 08:15:18.288188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.340 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.340 [2024-02-13 08:15:18.348132] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.340 [2024-02-13 08:15:18.426707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:45.340 [2024-02-13 08:15:18.426810] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.340 [2024-02-13 08:15:18.426818] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.340 [2024-02-13 08:15:18.426824] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:45.340 [2024-02-13 08:15:18.426875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.340 [2024-02-13 08:15:18.426890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.340 [2024-02-13 08:15:18.426990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.340 [2024-02-13 08:15:18.426991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.600 08:15:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:45.600 08:15:19 -- common/autotest_common.sh@850 -- # return 0 00:14:45.600 08:15:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:45.600 08:15:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 08:15:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.600 08:15:19 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 [2024-02-13 08:15:19.131844] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.600 08:15:19 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 Malloc0 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.600 08:15:19 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 Malloc1 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:14:45.600 08:15:19 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.600 08:15:19 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.600 08:15:19 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.600 08:15:19 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 [2024-02-13 08:15:19.208914] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.600 08:15:19 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.600 08:15:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.600 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.600 08:15:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.600 08:15:19 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:45.859 00:14:45.859 Discovery Log Number of Records 2, Generation counter 2 00:14:45.859 =====Discovery Log Entry 0====== 00:14:45.859 trtype: tcp 00:14:45.859 adrfam: ipv4 00:14:45.859 subtype: current discovery subsystem 00:14:45.859 treq: not required 00:14:45.859 portid: 0 00:14:45.859 trsvcid: 4420 00:14:45.859 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:45.859 traddr: 10.0.0.2 00:14:45.859 eflags: explicit discovery connections, duplicate discovery information 00:14:45.859 sectype: none 00:14:45.859 =====Discovery Log Entry 1====== 00:14:45.859 trtype: tcp 00:14:45.859 adrfam: ipv4 00:14:45.859 subtype: nvme subsystem 00:14:45.859 treq: not required 00:14:45.859 portid: 0 00:14:45.859 trsvcid: 4420 00:14:45.859 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:45.859 traddr: 10.0.0.2 00:14:45.859 eflags: none 00:14:45.859 sectype: none 00:14:45.859 08:15:19 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:45.859 08:15:19 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:45.859 08:15:19 -- nvmf/common.sh@510 -- # local dev _ 00:14:45.859 08:15:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:45.859 08:15:19 -- nvmf/common.sh@509 -- # nvme list 00:14:45.859 08:15:19 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:45.859 08:15:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:45.859 08:15:19 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.859 08:15:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:45.859 08:15:19 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:45.859 08:15:19 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:14:45.859 08:15:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:45.859 08:15:19 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:45.859 08:15:19 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:14:45.859 
08:15:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:45.859 08:15:19 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:14:45.859 08:15:19 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:47.235 08:15:20 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:47.235 08:15:20 -- common/autotest_common.sh@1175 -- # local i=0 00:14:47.235 08:15:20 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.235 08:15:20 -- common/autotest_common.sh@1177 -- # [[ -n 2 ]] 00:14:47.235 08:15:20 -- common/autotest_common.sh@1178 -- # nvme_device_counter=2 00:14:47.235 08:15:20 -- common/autotest_common.sh@1182 -- # sleep 2 00:14:49.175 08:15:22 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:14:49.175 08:15:22 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:14:49.175 08:15:22 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.175 08:15:22 -- common/autotest_common.sh@1184 -- # nvme_devices=2 00:14:49.175 08:15:22 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.175 08:15:22 -- common/autotest_common.sh@1185 -- # return 0 00:14:49.175 08:15:22 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:49.175 08:15:22 -- nvmf/common.sh@510 -- # local dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@509 -- # nvme list 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 
00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme1n2 00:14:49.175 /dev/nvme1n1 00:14:49.175 /dev/nvme0n2 00:14:49.175 /dev/nvme0n1 ]] 00:14:49.175 08:15:22 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:49.175 08:15:22 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:49.175 08:15:22 -- nvmf/common.sh@510 -- # local dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@509 -- # nvme list 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme1n2 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme1n1 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- 
nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:49.175 08:15:22 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:49.175 08:15:22 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:49.175 08:15:22 -- target/nvme_cli.sh@59 -- # nvme_num=4 00:14:49.175 08:15:22 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.175 08:15:22 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:49.175 08:15:22 -- common/autotest_common.sh@1196 -- # local i=0 00:14:49.175 08:15:22 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:14:49.175 08:15:22 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.175 08:15:22 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:49.175 08:15:22 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:49.175 08:15:22 -- common/autotest_common.sh@1208 -- # return 0 00:14:49.175 08:15:22 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:49.175 08:15:22 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.175 08:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.175 08:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:49.175 08:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.175 08:15:22 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:49.175 08:15:22 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:49.175 08:15:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:49.175 08:15:22 -- nvmf/common.sh@116 -- # sync 00:14:49.175 08:15:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:49.175 08:15:22 -- 
nvmf/common.sh@119 -- # set +e 00:14:49.176 08:15:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:49.176 08:15:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:49.176 rmmod nvme_tcp 00:14:49.176 rmmod nvme_fabrics 00:14:49.435 rmmod nvme_keyring 00:14:49.435 08:15:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:49.435 08:15:22 -- nvmf/common.sh@123 -- # set -e 00:14:49.435 08:15:22 -- nvmf/common.sh@124 -- # return 0 00:14:49.435 08:15:22 -- nvmf/common.sh@477 -- # '[' -n 2213871 ']' 00:14:49.435 08:15:22 -- nvmf/common.sh@478 -- # killprocess 2213871 00:14:49.435 08:15:22 -- common/autotest_common.sh@924 -- # '[' -z 2213871 ']' 00:14:49.435 08:15:22 -- common/autotest_common.sh@928 -- # kill -0 2213871 00:14:49.435 08:15:22 -- common/autotest_common.sh@929 -- # uname 00:14:49.435 08:15:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:49.435 08:15:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2213871 00:14:49.435 08:15:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:49.435 08:15:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:49.435 08:15:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2213871' 00:14:49.435 killing process with pid 2213871 00:14:49.435 08:15:22 -- common/autotest_common.sh@943 -- # kill 2213871 00:14:49.435 08:15:22 -- common/autotest_common.sh@948 -- # wait 2213871 00:14:49.694 08:15:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:49.694 08:15:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:49.694 08:15:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:49.694 08:15:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.694 08:15:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:49.694 08:15:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.694 08:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:14:49.694 08:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.600 08:15:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:51.600 00:14:51.600 real 0m13.086s 00:14:51.600 user 0m20.614s 00:14:51.600 sys 0m5.017s 00:14:51.600 08:15:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.600 08:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:51.600 ************************************ 00:14:51.600 END TEST nvmf_nvme_cli 00:14:51.600 ************************************ 00:14:51.600 08:15:25 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:51.600 08:15:25 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:51.600 08:15:25 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:51.600 08:15:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:51.600 08:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:51.859 ************************************ 00:14:51.859 START TEST nvmf_host_management 00:14:51.859 ************************************ 00:14:51.859 08:15:25 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:51.859 * Looking for test storage... 
00:14:51.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.859 08:15:25 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.859 08:15:25 -- nvmf/common.sh@7 -- # uname -s 00:14:51.859 08:15:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.859 08:15:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.859 08:15:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.859 08:15:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.859 08:15:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.859 08:15:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.859 08:15:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.859 08:15:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.859 08:15:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.860 08:15:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.860 08:15:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:51.860 08:15:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:51.860 08:15:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.860 08:15:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.860 08:15:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.860 08:15:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.860 08:15:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.860 08:15:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.860 08:15:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.860 08:15:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.860 08:15:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.860 08:15:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.860 08:15:25 -- paths/export.sh@5 -- # export PATH 00:14:51.860 08:15:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.860 08:15:25 -- nvmf/common.sh@46 -- # : 0 00:14:51.860 08:15:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.860 08:15:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.860 08:15:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.860 08:15:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.860 08:15:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.860 08:15:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.860 08:15:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.860 08:15:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.860 08:15:25 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.860 08:15:25 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.860 08:15:25 -- target/host_management.sh@104 -- # nvmftestinit 00:14:51.860 08:15:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:51.860 08:15:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.860 08:15:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.860 08:15:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.860 08:15:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.860 08:15:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.860 08:15:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.860 08:15:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:14:51.860 08:15:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:51.860 08:15:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:51.860 08:15:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:51.860 08:15:25 -- common/autotest_common.sh@10 -- # set +x 00:14:57.133 08:15:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:57.133 08:15:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:57.133 08:15:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:57.133 08:15:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:57.133 08:15:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:57.133 08:15:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:57.133 08:15:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:57.133 08:15:30 -- nvmf/common.sh@294 -- # net_devs=() 00:14:57.133 08:15:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:57.133 08:15:30 -- nvmf/common.sh@295 -- # e810=() 00:14:57.133 08:15:30 -- nvmf/common.sh@295 -- # local -ga e810 00:14:57.133 08:15:30 -- nvmf/common.sh@296 -- # x722=() 00:14:57.133 08:15:30 -- nvmf/common.sh@296 -- # local -ga x722 00:14:57.133 08:15:30 -- nvmf/common.sh@297 -- # mlx=() 00:14:57.133 08:15:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:57.133 08:15:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:14:57.133 08:15:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.133 08:15:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:57.133 08:15:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:57.133 08:15:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:57.133 08:15:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.133 08:15:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:57.133 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:57.133 08:15:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.133 08:15:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:57.133 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:57.133 08:15:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.133 08:15:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:57.134 08:15:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:57.134 08:15:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:57.134 
08:15:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:57.134 08:15:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.134 08:15:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.134 08:15:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.134 08:15:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.134 08:15:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:57.134 Found net devices under 0000:af:00.0: cvl_0_0 00:14:57.134 08:15:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.134 08:15:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.134 08:15:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.134 08:15:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.134 08:15:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.134 08:15:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:57.134 Found net devices under 0000:af:00.1: cvl_0_1 00:14:57.134 08:15:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.134 08:15:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:57.134 08:15:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:57.134 08:15:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:57.134 08:15:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:57.134 08:15:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:57.134 08:15:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.134 08:15:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.134 08:15:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.134 08:15:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:57.134 08:15:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.134 08:15:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.134 08:15:30 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:57.134 08:15:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.134 08:15:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.134 08:15:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:57.134 08:15:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:57.134 08:15:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.134 08:15:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.134 08:15:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.134 08:15:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.134 08:15:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:57.134 08:15:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.134 08:15:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.134 08:15:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.134 08:15:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:57.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:14:57.134 00:14:57.134 --- 10.0.0.2 ping statistics --- 00:14:57.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.134 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:14:57.134 08:15:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:14:57.134 00:14:57.134 --- 10.0.0.1 ping statistics --- 00:14:57.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.134 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:14:57.134 08:15:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.134 08:15:30 -- nvmf/common.sh@410 -- # return 0 00:14:57.134 08:15:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.134 08:15:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.134 08:15:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:57.134 08:15:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:57.134 08:15:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.134 08:15:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:57.134 08:15:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:57.134 08:15:30 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:57.134 08:15:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:14:57.134 08:15:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:57.134 08:15:30 -- common/autotest_common.sh@10 -- # set +x 00:14:57.134 ************************************ 00:14:57.134 START TEST nvmf_host_management 00:14:57.134 ************************************ 00:14:57.134 08:15:30 -- common/autotest_common.sh@1102 -- # nvmf_host_management 00:14:57.134 08:15:30 -- target/host_management.sh@69 -- # starttarget 00:14:57.134 08:15:30 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:57.134 08:15:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.134 08:15:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.134 08:15:30 -- common/autotest_common.sh@10 -- # set +x 00:14:57.134 08:15:30 -- nvmf/common.sh@469 -- # nvmfpid=2218402 00:14:57.134 08:15:30 -- nvmf/common.sh@470 -- # waitforlisten 2218402 
00:14:57.134 08:15:30 -- common/autotest_common.sh@817 -- # '[' -z 2218402 ']' 00:14:57.134 08:15:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.134 08:15:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.134 08:15:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.134 08:15:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.134 08:15:30 -- common/autotest_common.sh@10 -- # set +x 00:14:57.134 08:15:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:57.394 [2024-02-13 08:15:30.854808] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:57.394 [2024-02-13 08:15:30.854849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.394 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.394 [2024-02-13 08:15:30.916666] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.394 [2024-02-13 08:15:30.991930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.394 [2024-02-13 08:15:30.992034] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.394 [2024-02-13 08:15:30.992041] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.394 [2024-02-13 08:15:30.992047] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.394 [2024-02-13 08:15:30.992082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.394 [2024-02-13 08:15:30.992191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.394 [2024-02-13 08:15:30.992298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.394 [2024-02-13 08:15:30.992299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:58.332 08:15:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.332 08:15:31 -- common/autotest_common.sh@850 -- # return 0 00:14:58.332 08:15:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.332 08:15:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:58.332 08:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.332 08:15:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.332 08:15:31 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.332 08:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.332 08:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.332 [2024-02-13 08:15:31.691886] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.332 08:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.332 08:15:31 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:58.332 08:15:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:58.332 08:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.332 08:15:31 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:58.332 08:15:31 -- target/host_management.sh@23 -- # cat 00:14:58.332 08:15:31 -- target/host_management.sh@30 -- # rpc_cmd 00:14:58.332 08:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.332 08:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.332 
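The trace above shows `waitforlisten` from autotest_common.sh blocking until pid 2218402 exposes its RPC socket at /var/tmp/spdk.sock. A minimal stand-alone sketch of that wait loop (the `wait_for_rpc_sock` name, the 0.1 s poll interval, and the socket-existence readiness check are assumptions for illustration, not the actual helper):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a waitforlisten-style helper: poll until the target
# process has created its RPC Unix domain socket, or give up after
# max_retries attempts (the trace shows the real helper using max_retries=100).
wait_for_rpc_sock() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Bail out early if the process we are waiting on has already died.
        kill -0 "$pid" 2>/dev/null || return 1
        # Treat the socket's existence as "up and listening" (an assumption;
        # a real check might also issue an RPC such as rpc_get_methods).
        [ -S "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```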
Malloc0 00:14:58.332 [2024-02-13 08:15:31.751371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.332 08:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.332 08:15:31 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:58.332 08:15:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:58.332 08:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.332 08:15:31 -- target/host_management.sh@73 -- # perfpid=2218583 00:14:58.332 08:15:31 -- target/host_management.sh@74 -- # waitforlisten 2218583 /var/tmp/bdevperf.sock 00:14:58.332 08:15:31 -- common/autotest_common.sh@817 -- # '[' -z 2218583 ']' 00:14:58.332 08:15:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.332 08:15:31 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:58.332 08:15:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:58.332 08:15:31 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:58.332 08:15:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:58.332 08:15:31 -- nvmf/common.sh@520 -- # config=() 00:14:58.332 08:15:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:58.332 08:15:31 -- nvmf/common.sh@520 -- # local subsystem config 00:14:58.332 08:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.332 08:15:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:58.332 08:15:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:58.332 { 00:14:58.332 "params": { 00:14:58.332 "name": "Nvme$subsystem", 00:14:58.332 "trtype": "$TEST_TRANSPORT", 00:14:58.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:58.332 "adrfam": "ipv4", 00:14:58.332 "trsvcid": "$NVMF_PORT", 00:14:58.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:58.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:58.332 "hdgst": ${hdgst:-false}, 00:14:58.332 "ddgst": ${ddgst:-false} 00:14:58.332 }, 00:14:58.332 "method": "bdev_nvme_attach_controller" 00:14:58.332 } 00:14:58.332 EOF 00:14:58.332 )") 00:14:58.332 08:15:31 -- nvmf/common.sh@542 -- # cat 00:14:58.332 08:15:31 -- nvmf/common.sh@544 -- # jq . 00:14:58.332 08:15:31 -- nvmf/common.sh@545 -- # IFS=, 00:14:58.332 08:15:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:58.332 "params": { 00:14:58.332 "name": "Nvme0", 00:14:58.332 "trtype": "tcp", 00:14:58.332 "traddr": "10.0.0.2", 00:14:58.332 "adrfam": "ipv4", 00:14:58.332 "trsvcid": "4420", 00:14:58.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.332 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:58.332 "hdgst": false, 00:14:58.332 "ddgst": false 00:14:58.332 }, 00:14:58.332 "method": "bdev_nvme_attach_controller" 00:14:58.332 }' 00:14:58.332 [2024-02-13 08:15:31.840272] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
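The gen_nvmf_target_json trace above builds one `bdev_nvme_attach_controller` params block per subsystem index and prints the comma-joined result for bdevperf to consume. A self-contained sketch of that assembly (the `gen_target_json` name and the outer `{"subsystems":...}` wrapper are assumptions; the trace only shows the joined entries being printed):

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the config generation traced above: emit one
# bdev_nvme_attach_controller entry per subsystem id, substituting the id
# into the controller name and NQNs, then join the entries with IFS=, the
# way the printf in the trace does. The "subsystems"/"bdev" wrapper is an
# assumed final document shape, not shown in the trace itself.
gen_target_json() {
    local s entries=()
    for s in "${@:-0}"; do
        entries+=("{\"params\":{\"name\":\"Nvme$s\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$s\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$s\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
    done
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}"
}
```

Feeding this to bdevperf via process substitution, e.g. `bdevperf --json <(gen_target_json 0) ...`, is what the `--json /dev/fd/63` argument in the trace corresponds to.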
00:14:58.332 [2024-02-13 08:15:31.840320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218583 ] 00:14:58.332 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.332 [2024-02-13 08:15:31.902134] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.332 [2024-02-13 08:15:31.970812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.332 [2024-02-13 08:15:31.970869] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:14:58.901 Running I/O for 10 seconds... 00:14:59.163 08:15:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:59.163 08:15:32 -- common/autotest_common.sh@850 -- # return 0 00:14:59.163 08:15:32 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:59.163 08:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.163 08:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.163 08:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.163 08:15:32 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.163 08:15:32 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:59.163 08:15:32 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:59.163 08:15:32 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:59.163 08:15:32 -- target/host_management.sh@52 -- # local ret=1 00:14:59.163 08:15:32 -- target/host_management.sh@53 -- # local i 00:14:59.163 08:15:32 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:59.163 08:15:32 -- 
target/host_management.sh@54 -- # (( i != 0 )) 00:14:59.163 08:15:32 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:59.163 08:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.163 08:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.163 08:15:32 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:59.163 08:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.163 08:15:32 -- target/host_management.sh@55 -- # read_io_count=991 00:14:59.163 08:15:32 -- target/host_management.sh@58 -- # '[' 991 -ge 100 ']' 00:14:59.163 08:15:32 -- target/host_management.sh@59 -- # ret=0 00:14:59.163 08:15:32 -- target/host_management.sh@60 -- # break 00:14:59.163 08:15:32 -- target/host_management.sh@64 -- # return 0 00:14:59.163 08:15:32 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:59.163 08:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.163 08:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.163 [2024-02-13 08:15:32.726749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70290 is same with the state(5) to be set 00:14:59.163 [2024-02-13 08:15:32.726789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70290 is same with the state(5) to be set 00:14:59.163 [2024-02-13 08:15:32.726796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70290 is same with the state(5) to be set 00:14:59.163 [2024-02-13 08:15:32.726802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70290 is same with the state(5) to be set 00:14:59.163 [2024-02-13 08:15:32.726809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70290 is same with the state(5) to be set 00:14:59.163 [2024-02-13 08:15:32.726815] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70290 is same with the state(5) to be set 00:14:59.164 [2024-02-13 08:15:32.727103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe70290 is same with the state(5) to be set 00:14:59.164 [2024-02-13 08:15:32.727606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:59.164 [2024-02-13 08:15:32.727806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.164 [2024-02-13 08:15:32.727938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.164 [2024-02-13 08:15:32.727946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.727952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.727960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.727966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.727974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.727980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.727988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.727994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728137] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728214] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 
08:15:32.728379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.165 [2024-02-13 08:15:32.728470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.165 [2024-02-13 08:15:32.728478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.166 [2024-02-13 08:15:32.728484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.728492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.166 [2024-02-13 08:15:32.728498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.728505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.166 [2024-02-13 08:15:32.728511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.728519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.166 [2024-02-13 08:15:32.728525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.728533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.166 [2024-02-13 08:15:32.728539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.728547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.166 [2024-02-13 08:15:32.728553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.728560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.166 [2024-02-13 08:15:32.728566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.728573] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f29f0 is same with the state(5) to be set 00:14:59.166 [2024-02-13 08:15:32.728626] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f29f0 was disconnected and freed. reset controller. 
00:14:59.166 [2024-02-13 08:15:32.729506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:59.166 task offset: 9472 on job bdev=Nvme0n1 fails 00:14:59.166 00:14:59.166 Latency(us) 00:14:59.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.166 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:59.166 Job: Nvme0n1 ended in about 0.44 seconds with error 00:14:59.166 Verification LBA range: start 0x0 length 0x400 00:14:59.166 Nvme0n1 : 0.44 2438.89 152.43 145.20 0.00 24445.67 1763.23 37948.46 00:14:59.166 =================================================================================================================== 00:14:59.166 Total : 2438.89 152.43 145.20 0.00 24445.67 1763.23 37948.46 00:14:59.166 [2024-02-13 08:15:32.731053] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:59.166 [2024-02-13 08:15:32.731067] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d8630 (9): Bad file descriptor 00:14:59.166 [2024-02-13 08:15:32.731096] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:14:59.166 08:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.166 08:15:32 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:59.166 08:15:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.166 08:15:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.166 [2024-02-13 08:15:32.734238] ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:59.166 [2024-02-13 08:15:32.734345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK 
OFFSET 0x0 len:0x400 00:14:59.166 [2024-02-13 08:15:32.734369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.166 [2024-02-13 08:15:32.734383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:59.166 [2024-02-13 08:15:32.734390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:59.166 [2024-02-13 08:15:32.734397] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:59.166 [2024-02-13 08:15:32.734404] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16d8630 00:14:59.166 [2024-02-13 08:15:32.734424] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d8630 (9): Bad file descriptor 00:14:59.166 [2024-02-13 08:15:32.734435] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:59.166 [2024-02-13 08:15:32.734442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:59.166 [2024-02-13 08:15:32.734451] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:59.166 [2024-02-13 08:15:32.734463] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:14:59.166 08:15:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.166 08:15:32 -- target/host_management.sh@87 -- # sleep 1 00:15:00.105 08:15:33 -- target/host_management.sh@91 -- # kill -9 2218583 00:15:00.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2218583) - No such process 00:15:00.105 08:15:33 -- target/host_management.sh@91 -- # true 00:15:00.105 08:15:33 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:00.105 08:15:33 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:00.105 08:15:33 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:00.105 08:15:33 -- nvmf/common.sh@520 -- # config=() 00:15:00.105 08:15:33 -- nvmf/common.sh@520 -- # local subsystem config 00:15:00.105 08:15:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:00.105 08:15:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:00.105 { 00:15:00.105 "params": { 00:15:00.105 "name": "Nvme$subsystem", 00:15:00.105 "trtype": "$TEST_TRANSPORT", 00:15:00.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:00.105 "adrfam": "ipv4", 00:15:00.105 "trsvcid": "$NVMF_PORT", 00:15:00.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:00.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:00.105 "hdgst": ${hdgst:-false}, 00:15:00.105 "ddgst": ${ddgst:-false} 00:15:00.105 }, 00:15:00.105 "method": "bdev_nvme_attach_controller" 00:15:00.105 } 00:15:00.105 EOF 00:15:00.105 )") 00:15:00.105 08:15:33 -- nvmf/common.sh@542 -- # cat 00:15:00.105 08:15:33 -- nvmf/common.sh@544 -- # jq . 
00:15:00.105 08:15:33 -- nvmf/common.sh@545 -- # IFS=, 00:15:00.105 08:15:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:00.105 "params": { 00:15:00.105 "name": "Nvme0", 00:15:00.105 "trtype": "tcp", 00:15:00.105 "traddr": "10.0.0.2", 00:15:00.105 "adrfam": "ipv4", 00:15:00.105 "trsvcid": "4420", 00:15:00.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:00.105 "hdgst": false, 00:15:00.105 "ddgst": false 00:15:00.105 }, 00:15:00.105 "method": "bdev_nvme_attach_controller" 00:15:00.105 }' 00:15:00.105 [2024-02-13 08:15:33.790134] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:15:00.105 [2024-02-13 08:15:33.790181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218926 ] 00:15:00.365 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.365 [2024-02-13 08:15:33.849786] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.365 [2024-02-13 08:15:33.916903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.365 [2024-02-13 08:15:33.916959] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:15:00.624 Running I/O for 1 seconds... 
00:15:01.562 00:15:01.562 Latency(us) 00:15:01.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.562 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:01.562 Verification LBA range: start 0x0 length 0x400 00:15:01.562 Nvme0n1 : 1.01 3378.62 211.16 0.00 0.00 18682.34 2793.08 38697.45 00:15:01.562 =================================================================================================================== 00:15:01.562 Total : 3378.62 211.16 0.00 0.00 18682.34 2793.08 38697.45 00:15:01.562 [2024-02-13 08:15:35.130878] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:15:01.821 08:15:35 -- target/host_management.sh@101 -- # stoptarget 00:15:01.821 08:15:35 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:01.821 08:15:35 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:01.821 08:15:35 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:01.821 08:15:35 -- target/host_management.sh@40 -- # nvmftestfini 00:15:01.821 08:15:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.821 08:15:35 -- nvmf/common.sh@116 -- # sync 00:15:01.821 08:15:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:01.821 08:15:35 -- nvmf/common.sh@119 -- # set +e 00:15:01.821 08:15:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.822 08:15:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:01.822 rmmod nvme_tcp 00:15:01.822 rmmod nvme_fabrics 00:15:01.822 rmmod nvme_keyring 00:15:01.822 08:15:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.822 08:15:35 -- nvmf/common.sh@123 -- # set -e 00:15:01.822 08:15:35 -- nvmf/common.sh@124 -- # return 0 00:15:01.822 08:15:35 -- 
nvmf/common.sh@477 -- # '[' -n 2218402 ']' 00:15:01.822 08:15:35 -- nvmf/common.sh@478 -- # killprocess 2218402 00:15:01.822 08:15:35 -- common/autotest_common.sh@924 -- # '[' -z 2218402 ']' 00:15:01.822 08:15:35 -- common/autotest_common.sh@928 -- # kill -0 2218402 00:15:01.822 08:15:35 -- common/autotest_common.sh@929 -- # uname 00:15:01.822 08:15:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:01.822 08:15:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2218402 00:15:01.822 08:15:35 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:15:01.822 08:15:35 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:15:01.822 08:15:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2218402' 00:15:01.822 killing process with pid 2218402 00:15:01.822 08:15:35 -- common/autotest_common.sh@943 -- # kill 2218402 00:15:01.822 08:15:35 -- common/autotest_common.sh@948 -- # wait 2218402 00:15:02.081 [2024-02-13 08:15:35.649871] app.c: 603:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:02.081 08:15:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.081 08:15:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.081 08:15:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:02.081 08:15:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.081 08:15:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.081 08:15:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.081 08:15:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.081 08:15:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.618 08:15:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:04.618 00:15:04.618 real 0m6.930s 00:15:04.618 user 0m21.017s 00:15:04.618 sys 0m1.207s 00:15:04.618 08:15:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:04.618 08:15:37 -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.618 ************************************ 00:15:04.618 END TEST nvmf_host_management 00:15:04.618 ************************************ 00:15:04.618 08:15:37 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:04.618 00:15:04.618 real 0m12.479s 00:15:04.618 user 0m22.299s 00:15:04.618 sys 0m5.300s 00:15:04.618 08:15:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:04.618 08:15:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.618 ************************************ 00:15:04.618 END TEST nvmf_host_management 00:15:04.618 ************************************ 00:15:04.618 08:15:37 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:04.618 08:15:37 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:04.618 08:15:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:04.618 08:15:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.618 ************************************ 00:15:04.618 START TEST nvmf_lvol 00:15:04.618 ************************************ 00:15:04.618 08:15:37 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:04.618 * Looking for test storage... 
00:15:04.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.618 08:15:37 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.618 08:15:37 -- nvmf/common.sh@7 -- # uname -s 00:15:04.618 08:15:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.618 08:15:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.618 08:15:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.618 08:15:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.618 08:15:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.618 08:15:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.618 08:15:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.618 08:15:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.618 08:15:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.618 08:15:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.618 08:15:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:04.618 08:15:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:04.618 08:15:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.618 08:15:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.618 08:15:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.618 08:15:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.618 08:15:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.618 08:15:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.618 08:15:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.618 08:15:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.619 08:15:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.619 08:15:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.619 08:15:37 -- paths/export.sh@5 -- # export PATH 00:15:04.619 08:15:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.619 08:15:37 -- nvmf/common.sh@46 -- # : 0 00:15:04.619 08:15:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.619 08:15:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.619 08:15:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.619 08:15:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.619 08:15:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.619 08:15:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.619 08:15:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.619 08:15:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.619 08:15:37 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.619 08:15:37 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.619 08:15:37 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:04.619 08:15:37 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:04.619 08:15:37 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.619 08:15:37 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:04.619 08:15:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:04.619 08:15:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.619 08:15:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:04.619 08:15:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:04.619 08:15:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:15:04.619 08:15:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.619 08:15:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.619 08:15:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.619 08:15:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:04.619 08:15:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:04.619 08:15:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:04.619 08:15:37 -- common/autotest_common.sh@10 -- # set +x 00:15:11.186 08:15:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.187 08:15:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:11.187 08:15:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:11.187 08:15:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:11.187 08:15:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:11.187 08:15:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:11.187 08:15:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:11.187 08:15:43 -- nvmf/common.sh@294 -- # net_devs=() 00:15:11.187 08:15:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:11.187 08:15:43 -- nvmf/common.sh@295 -- # e810=() 00:15:11.187 08:15:43 -- nvmf/common.sh@295 -- # local -ga e810 00:15:11.187 08:15:43 -- nvmf/common.sh@296 -- # x722=() 00:15:11.187 08:15:43 -- nvmf/common.sh@296 -- # local -ga x722 00:15:11.187 08:15:43 -- nvmf/common.sh@297 -- # mlx=() 00:15:11.187 08:15:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:11.187 08:15:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.187 08:15:43 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.187 08:15:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:11.187 08:15:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:11.187 08:15:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:11.187 08:15:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.187 08:15:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:11.187 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:11.187 08:15:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:11.187 08:15:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:11.187 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:11.187 08:15:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.187 08:15:43 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:11.187 08:15:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.187 08:15:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.187 08:15:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.187 08:15:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.187 08:15:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:11.187 Found net devices under 0000:af:00.0: cvl_0_0 00:15:11.187 08:15:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.187 08:15:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:11.187 08:15:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.187 08:15:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:11.187 08:15:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.187 08:15:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:11.187 Found net devices under 0000:af:00.1: cvl_0_1 00:15:11.187 08:15:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.187 08:15:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:11.187 08:15:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:11.187 08:15:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:11.187 08:15:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.187 08:15:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.187 08:15:43 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.187 08:15:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:11.187 08:15:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.187 08:15:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.187 08:15:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:11.187 08:15:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.187 08:15:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.187 08:15:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:11.187 08:15:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:11.187 08:15:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.187 08:15:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.187 08:15:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.187 08:15:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.187 08:15:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:11.187 08:15:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.187 08:15:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.187 08:15:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.187 08:15:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:11.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:15:11.187 00:15:11.187 --- 10.0.0.2 ping statistics --- 00:15:11.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.187 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:15:11.187 08:15:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:15:11.187 00:15:11.187 --- 10.0.0.1 ping statistics --- 00:15:11.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.187 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:15:11.187 08:15:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.187 08:15:43 -- nvmf/common.sh@410 -- # return 0 00:15:11.187 08:15:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.187 08:15:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.187 08:15:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:11.187 08:15:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.187 08:15:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:11.187 08:15:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:11.187 08:15:43 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:11.187 08:15:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.187 08:15:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:11.187 08:15:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.187 08:15:43 -- nvmf/common.sh@469 -- # nvmfpid=2222969 00:15:11.187 08:15:43 -- nvmf/common.sh@470 -- # waitforlisten 2222969 00:15:11.187 08:15:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:11.187 08:15:43 -- common/autotest_common.sh@817 -- # '[' -z 2222969 ']' 00:15:11.187 08:15:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.187 08:15:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.187 08:15:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:11.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.187 08:15:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.187 08:15:43 -- common/autotest_common.sh@10 -- # set +x 00:15:11.187 [2024-02-13 08:15:43.967174] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:15:11.187 [2024-02-13 08:15:43.967221] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.187 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.187 [2024-02-13 08:15:44.030402] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.187 [2024-02-13 08:15:44.107176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.187 [2024-02-13 08:15:44.107282] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.187 [2024-02-13 08:15:44.107291] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.187 [2024-02-13 08:15:44.107297] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:11.187 [2024-02-13 08:15:44.107340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.187 [2024-02-13 08:15:44.107435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.187 [2024-02-13 08:15:44.107436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.187 08:15:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:11.187 08:15:44 -- common/autotest_common.sh@850 -- # return 0 00:15:11.187 08:15:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:11.187 08:15:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:11.187 08:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:11.187 08:15:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.187 08:15:44 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:11.447 [2024-02-13 08:15:44.948030] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.447 08:15:44 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:11.706 08:15:45 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:11.706 08:15:45 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:11.706 08:15:45 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:11.706 08:15:45 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:11.965 08:15:45 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:12.224 08:15:45 -- target/nvmf_lvol.sh@29 -- # lvs=a222e4c0-a670-4f31-b0e5-da89010b3f70 00:15:12.224 08:15:45 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a222e4c0-a670-4f31-b0e5-da89010b3f70 lvol 20 00:15:12.224 08:15:45 -- target/nvmf_lvol.sh@32 -- # lvol=c0c4bf48-6a1e-42d9-8da9-7aa82f2b54f9 00:15:12.224 08:15:45 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:12.483 08:15:46 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c0c4bf48-6a1e-42d9-8da9-7aa82f2b54f9 00:15:12.742 08:15:46 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:12.742 [2024-02-13 08:15:46.384389] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.742 08:15:46 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.002 08:15:46 -- target/nvmf_lvol.sh@42 -- # perf_pid=2223466 00:15:13.002 08:15:46 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:13.002 08:15:46 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:13.002 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.988 08:15:47 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c0c4bf48-6a1e-42d9-8da9-7aa82f2b54f9 MY_SNAPSHOT 00:15:14.248 08:15:47 -- target/nvmf_lvol.sh@47 -- # snapshot=f2541d33-58c2-4d15-9b41-5d737d4f4a21 00:15:14.248 08:15:47 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
c0c4bf48-6a1e-42d9-8da9-7aa82f2b54f9 30 00:15:14.507 08:15:47 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f2541d33-58c2-4d15-9b41-5d737d4f4a21 MY_CLONE 00:15:14.507 08:15:48 -- target/nvmf_lvol.sh@49 -- # clone=2f80acc0-6f40-4e4c-ac2a-30fa335e2f84 00:15:14.507 08:15:48 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2f80acc0-6f40-4e4c-ac2a-30fa335e2f84 00:15:15.076 08:15:48 -- target/nvmf_lvol.sh@53 -- # wait 2223466 00:15:25.058 Initializing NVMe Controllers 00:15:25.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:25.058 Controller IO queue size 128, less than required. 00:15:25.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:25.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:25.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:25.058 Initialization complete. Launching workers. 
00:15:25.058 ======================================================== 00:15:25.058 Latency(us) 00:15:25.058 Device Information : IOPS MiB/s Average min max 00:15:25.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12486.90 48.78 10256.06 1704.64 54814.78 00:15:25.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12299.30 48.04 10412.52 3772.91 65355.88 00:15:25.058 ======================================================== 00:15:25.058 Total : 24786.19 96.82 10333.70 1704.64 65355.88 00:15:25.058 00:15:25.058 08:15:56 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:25.058 08:15:57 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c0c4bf48-6a1e-42d9-8da9-7aa82f2b54f9 00:15:25.058 08:15:57 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a222e4c0-a670-4f31-b0e5-da89010b3f70 00:15:25.058 08:15:57 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:25.058 08:15:57 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:25.058 08:15:57 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:25.058 08:15:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:25.058 08:15:57 -- nvmf/common.sh@116 -- # sync 00:15:25.058 08:15:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:25.058 08:15:57 -- nvmf/common.sh@119 -- # set +e 00:15:25.058 08:15:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:25.058 08:15:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:25.058 rmmod nvme_tcp 00:15:25.058 rmmod nvme_fabrics 00:15:25.058 rmmod nvme_keyring 00:15:25.058 08:15:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.058 08:15:57 -- nvmf/common.sh@123 -- # set -e 00:15:25.058 08:15:57 -- nvmf/common.sh@124 -- # return 0 00:15:25.058 08:15:57 -- nvmf/common.sh@477 -- # '[' 
-n 2222969 ']' 00:15:25.058 08:15:57 -- nvmf/common.sh@478 -- # killprocess 2222969 00:15:25.058 08:15:57 -- common/autotest_common.sh@924 -- # '[' -z 2222969 ']' 00:15:25.058 08:15:57 -- common/autotest_common.sh@928 -- # kill -0 2222969 00:15:25.058 08:15:57 -- common/autotest_common.sh@929 -- # uname 00:15:25.058 08:15:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:25.058 08:15:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2222969 00:15:25.058 08:15:57 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:25.058 08:15:57 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:25.058 08:15:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2222969' 00:15:25.058 killing process with pid 2222969 00:15:25.058 08:15:57 -- common/autotest_common.sh@943 -- # kill 2222969 00:15:25.058 08:15:57 -- common/autotest_common.sh@948 -- # wait 2222969 00:15:25.058 08:15:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.058 08:15:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.058 08:15:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.058 08:15:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.058 08:15:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.058 08:15:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.058 08:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.058 08:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.435 08:15:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:26.435 00:15:26.435 real 0m22.145s 00:15:26.435 user 1m3.720s 00:15:26.435 sys 0m7.341s 00:15:26.435 08:15:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:26.435 08:15:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.435 ************************************ 00:15:26.435 END TEST nvmf_lvol 00:15:26.435 
************************************ 00:15:26.435 08:15:59 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:26.435 08:15:59 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:26.435 08:15:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:26.435 08:15:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.435 ************************************ 00:15:26.435 START TEST nvmf_lvs_grow 00:15:26.435 ************************************ 00:15:26.435 08:15:59 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:26.435 * Looking for test storage... 00:15:26.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.435 08:16:00 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.435 08:16:00 -- nvmf/common.sh@7 -- # uname -s 00:15:26.435 08:16:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.435 08:16:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.435 08:16:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.435 08:16:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.435 08:16:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.435 08:16:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.435 08:16:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.435 08:16:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.435 08:16:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.435 08:16:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.435 08:16:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:26.435 08:16:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 
00:15:26.435 08:16:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.435 08:16:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.435 08:16:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.435 08:16:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.435 08:16:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.435 08:16:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.435 08:16:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.435 08:16:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.435 08:16:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.435 08:16:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.435 08:16:00 -- paths/export.sh@5 -- # export PATH 00:15:26.435 08:16:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.435 08:16:00 -- nvmf/common.sh@46 -- # : 0 00:15:26.435 08:16:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:26.435 08:16:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:26.435 08:16:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:26.435 08:16:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.435 08:16:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.435 08:16:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:26.436 08:16:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:26.436 08:16:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:26.436 08:16:00 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.436 08:16:00 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:26.436 08:16:00 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:26.436 08:16:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:26.436 08:16:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.436 08:16:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:26.436 08:16:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:26.436 08:16:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:26.436 08:16:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.436 08:16:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.436 08:16:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.436 08:16:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:26.436 08:16:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:26.436 08:16:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:26.436 08:16:00 -- common/autotest_common.sh@10 -- # set +x 00:15:31.708 08:16:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:31.708 08:16:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:31.708 08:16:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:31.708 08:16:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:31.708 08:16:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:31.708 08:16:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:31.708 08:16:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:31.708 08:16:05 -- nvmf/common.sh@294 -- # net_devs=() 00:15:31.708 08:16:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:31.708 08:16:05 -- nvmf/common.sh@295 -- # e810=() 00:15:31.708 08:16:05 -- nvmf/common.sh@295 -- # local -ga e810 00:15:31.708 08:16:05 -- nvmf/common.sh@296 -- # x722=() 00:15:31.708 08:16:05 -- nvmf/common.sh@296 -- # local -ga x722 00:15:31.708 08:16:05 -- nvmf/common.sh@297 -- # mlx=() 00:15:31.708 08:16:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:31.708 08:16:05 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.708 08:16:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:31.708 08:16:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:31.708 08:16:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:31.708 08:16:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:31.708 08:16:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:31.708 08:16:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:31.708 08:16:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.708 08:16:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:31.708 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:31.708 08:16:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.708 08:16:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.708 08:16:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.708 08:16:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.708 08:16:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.708 
08:16:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:31.708 08:16:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:31.708 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:31.709 08:16:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:31.709 08:16:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:31.709 08:16:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.709 08:16:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.709 08:16:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.709 08:16:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:31.709 Found net devices under 0000:af:00.0: cvl_0_0 00:15:31.709 08:16:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.709 08:16:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:31.709 08:16:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.709 08:16:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:31.709 08:16:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.709 08:16:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:31.709 Found net devices under 0000:af:00.1: cvl_0_1 00:15:31.709 08:16:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.709 08:16:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:31.709 08:16:05 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:15:31.709 08:16:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:31.709 08:16:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.709 08:16:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.709 08:16:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.709 08:16:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:31.709 08:16:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.709 08:16:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.709 08:16:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:31.709 08:16:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.709 08:16:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.709 08:16:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:31.709 08:16:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:31.709 08:16:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.709 08:16:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.709 08:16:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.709 08:16:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.709 08:16:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:31.709 08:16:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.709 08:16:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.709 08:16:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.709 08:16:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:31.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:31.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:15:31.709 00:15:31.709 --- 10.0.0.2 ping statistics --- 00:15:31.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.709 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:15:31.709 08:16:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:15:31.709 00:15:31.709 --- 10.0.0.1 ping statistics --- 00:15:31.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.709 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:15:31.709 08:16:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.709 08:16:05 -- nvmf/common.sh@410 -- # return 0 00:15:31.709 08:16:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.709 08:16:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.709 08:16:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.709 08:16:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.709 08:16:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.709 08:16:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.969 08:16:05 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:15:31.969 08:16:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.969 08:16:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:31.969 08:16:05 -- common/autotest_common.sh@10 -- # set +x 00:15:31.969 08:16:05 -- nvmf/common.sh@469 -- # nvmfpid=2229094 00:15:31.969 08:16:05 -- nvmf/common.sh@470 -- # waitforlisten 2229094 00:15:31.969 08:16:05 -- common/autotest_common.sh@817 -- # '[' -z 2229094 ']' 00:15:31.969 08:16:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.969 08:16:05 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:15:31.969 08:16:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.969 08:16:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:31.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.969 08:16:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:31.969 08:16:05 -- common/autotest_common.sh@10 -- # set +x 00:15:31.969 [2024-02-13 08:16:05.462534] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:15:31.969 [2024-02-13 08:16:05.462576] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.969 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.969 [2024-02-13 08:16:05.523390] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.969 [2024-02-13 08:16:05.599052] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.969 [2024-02-13 08:16:05.599155] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.969 [2024-02-13 08:16:05.599164] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.969 [2024-02-13 08:16:05.599170] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:31.969 [2024-02-13 08:16:05.599188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.906 08:16:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:32.906 08:16:06 -- common/autotest_common.sh@850 -- # return 0 00:15:32.906 08:16:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:32.906 08:16:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:32.906 08:16:06 -- common/autotest_common.sh@10 -- # set +x 00:15:32.906 08:16:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.906 [2024-02-13 08:16:06.437995] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:15:32.906 08:16:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:15:32.906 08:16:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:32.906 08:16:06 -- common/autotest_common.sh@10 -- # set +x 00:15:32.906 ************************************ 00:15:32.906 START TEST lvs_grow_clean 00:15:32.906 ************************************ 00:15:32.906 08:16:06 -- common/autotest_common.sh@1102 -- # lvs_grow 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:32.906 08:16:06 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:33.165 08:16:06 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:33.166 08:16:06 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:33.166 08:16:06 -- target/nvmf_lvs_grow.sh@28 -- # lvs=91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:33.166 08:16:06 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:33.166 08:16:06 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:33.425 08:16:07 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:33.425 08:16:07 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:33.425 08:16:07 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b lvol 150 00:15:33.684 08:16:07 -- target/nvmf_lvs_grow.sh@33 -- # lvol=45f2a0eb-f5dd-4069-ba19-4d71414c026f 00:15:33.684 08:16:07 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:33.684 08:16:07 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:33.684 [2024-02-13 08:16:07.338349] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:33.684 [2024-02-13 08:16:07.338397] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:33.684 true 00:15:33.684 08:16:07 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:33.684 08:16:07 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:33.943 08:16:07 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:33.943 08:16:07 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:34.203 08:16:07 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 45f2a0eb-f5dd-4069-ba19-4d71414c026f 00:15:34.203 08:16:07 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:34.462 [2024-02-13 08:16:07.992307] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.462 08:16:08 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:34.722 08:16:08 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2229597 00:15:34.722 08:16:08 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:34.722 08:16:08 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:34.722 08:16:08 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2229597 /var/tmp/bdevperf.sock 00:15:34.722 08:16:08 -- common/autotest_common.sh@817 -- # '[' -z 2229597 ']' 00:15:34.722 08:16:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:34.722 08:16:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:34.722 08:16:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:34.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:34.722 08:16:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:34.722 08:16:08 -- common/autotest_common.sh@10 -- # set +x 00:15:34.722 [2024-02-13 08:16:08.210416] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:15:34.722 [2024-02-13 08:16:08.210463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229597 ] 00:15:34.722 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.722 [2024-02-13 08:16:08.270091] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.722 [2024-02-13 08:16:08.344460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.660 08:16:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:35.660 08:16:08 -- common/autotest_common.sh@850 -- # return 0 00:15:35.660 08:16:08 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:35.660 Nvme0n1 00:15:35.919 08:16:09 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:15:35.919 [ 00:15:35.919 { 00:15:35.919 "name": "Nvme0n1", 00:15:35.919 "aliases": [ 00:15:35.919 "45f2a0eb-f5dd-4069-ba19-4d71414c026f" 00:15:35.919 ], 00:15:35.919 "product_name": "NVMe disk", 00:15:35.919 "block_size": 4096, 00:15:35.919 "num_blocks": 38912, 00:15:35.919 "uuid": "45f2a0eb-f5dd-4069-ba19-4d71414c026f", 00:15:35.919 "assigned_rate_limits": { 00:15:35.919 "rw_ios_per_sec": 0, 00:15:35.919 "rw_mbytes_per_sec": 0, 00:15:35.919 "r_mbytes_per_sec": 0, 00:15:35.919 "w_mbytes_per_sec": 0 00:15:35.919 }, 00:15:35.919 "claimed": false, 00:15:35.919 "zoned": false, 00:15:35.919 "supported_io_types": { 00:15:35.919 "read": true, 00:15:35.920 "write": true, 00:15:35.920 "unmap": true, 00:15:35.920 "write_zeroes": true, 00:15:35.920 "flush": true, 00:15:35.920 "reset": true, 00:15:35.920 "compare": true, 00:15:35.920 "compare_and_write": true, 00:15:35.920 "abort": true, 00:15:35.920 "nvme_admin": true, 00:15:35.920 "nvme_io": true 00:15:35.920 }, 00:15:35.920 "driver_specific": { 00:15:35.920 "nvme": [ 00:15:35.920 { 00:15:35.920 "trid": { 00:15:35.920 "trtype": "TCP", 00:15:35.920 "adrfam": "IPv4", 00:15:35.920 "traddr": "10.0.0.2", 00:15:35.920 "trsvcid": "4420", 00:15:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:35.920 }, 00:15:35.920 "ctrlr_data": { 00:15:35.920 "cntlid": 1, 00:15:35.920 "vendor_id": "0x8086", 00:15:35.920 "model_number": "SPDK bdev Controller", 00:15:35.920 "serial_number": "SPDK0", 00:15:35.920 "firmware_revision": "24.05", 00:15:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:35.920 "oacs": { 00:15:35.920 "security": 0, 00:15:35.920 "format": 0, 00:15:35.920 "firmware": 0, 00:15:35.920 "ns_manage": 0 00:15:35.920 }, 00:15:35.920 "multi_ctrlr": true, 00:15:35.920 "ana_reporting": false 00:15:35.920 }, 00:15:35.920 "vs": { 00:15:35.920 "nvme_version": "1.3" 00:15:35.920 }, 00:15:35.920 "ns_data": { 00:15:35.920 "id": 1, 00:15:35.920 "can_share": true 00:15:35.920 } 00:15:35.920 } 00:15:35.920 ], 00:15:35.920 
"mp_policy": "active_passive" 00:15:35.920 } 00:15:35.920 } 00:15:35.920 ] 00:15:35.920 08:16:09 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2229832 00:15:35.920 08:16:09 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:35.920 08:16:09 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.920 Running I/O for 10 seconds... 00:15:37.300 Latency(us) 00:15:37.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.300 Nvme0n1 : 1.00 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:15:37.300 =================================================================================================================== 00:15:37.300 Total : 23584.00 92.12 0.00 0.00 0.00 0.00 0.00 00:15:37.300 00:15:37.869 08:16:11 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:38.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.128 Nvme0n1 : 2.00 23887.50 93.31 0.00 0.00 0.00 0.00 0.00 00:15:38.128 =================================================================================================================== 00:15:38.128 Total : 23887.50 93.31 0.00 0.00 0.00 0.00 0.00 00:15:38.128 00:15:38.128 true 00:15:38.128 08:16:11 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:38.128 08:16:11 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:38.387 08:16:11 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:38.388 08:16:11 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:38.388 08:16:11 -- target/nvmf_lvs_grow.sh@65 -- # wait 2229832 00:15:38.956 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.956 Nvme0n1 : 3.00 23996.33 93.74 0.00 0.00 0.00 0.00 0.00 00:15:38.956 =================================================================================================================== 00:15:38.956 Total : 23996.33 93.74 0.00 0.00 0.00 0.00 0.00 00:15:38.956 00:15:40.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.334 Nvme0n1 : 4.00 24077.25 94.05 0.00 0.00 0.00 0.00 0.00 00:15:40.334 =================================================================================================================== 00:15:40.334 Total : 24077.25 94.05 0.00 0.00 0.00 0.00 0.00 00:15:40.334 00:15:41.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.302 Nvme0n1 : 5.00 24151.40 94.34 0.00 0.00 0.00 0.00 0.00 00:15:41.302 =================================================================================================================== 00:15:41.302 Total : 24151.40 94.34 0.00 0.00 0.00 0.00 0.00 00:15:41.302 00:15:42.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.241 Nvme0n1 : 6.00 24200.67 94.53 0.00 0.00 0.00 0.00 0.00 00:15:42.241 =================================================================================================================== 00:15:42.241 Total : 24200.67 94.53 0.00 0.00 0.00 0.00 0.00 00:15:42.241 00:15:43.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.179 Nvme0n1 : 7.00 24236.00 94.67 0.00 0.00 0.00 0.00 0.00 00:15:43.179 =================================================================================================================== 00:15:43.179 Total : 24236.00 94.67 0.00 0.00 0.00 0.00 0.00 00:15:43.179 00:15:44.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.117 Nvme0n1 : 8.00 24259.00 94.76 0.00 0.00 0.00 0.00 0.00 00:15:44.117 
=================================================================================================================== 00:15:44.117 Total : 24259.00 94.76 0.00 0.00 0.00 0.00 0.00 00:15:44.117 00:15:45.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.056 Nvme0n1 : 9.00 24277.33 94.83 0.00 0.00 0.00 0.00 0.00 00:15:45.056 =================================================================================================================== 00:15:45.056 Total : 24277.33 94.83 0.00 0.00 0.00 0.00 0.00 00:15:45.056 00:15:45.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.994 Nvme0n1 : 10.00 24297.40 94.91 0.00 0.00 0.00 0.00 0.00 00:15:45.994 =================================================================================================================== 00:15:45.994 Total : 24297.40 94.91 0.00 0.00 0.00 0.00 0.00 00:15:45.994 00:15:45.994 00:15:45.994 Latency(us) 00:15:45.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.994 Nvme0n1 : 10.00 24298.60 94.92 0.00 0.00 5264.49 3183.18 17850.76 00:15:45.994 =================================================================================================================== 00:15:45.994 Total : 24298.60 94.92 0.00 0.00 5264.49 3183.18 17850.76 00:15:45.994 0 00:15:45.994 08:16:19 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2229597 00:15:45.994 08:16:19 -- common/autotest_common.sh@924 -- # '[' -z 2229597 ']' 00:15:45.994 08:16:19 -- common/autotest_common.sh@928 -- # kill -0 2229597 00:15:45.994 08:16:19 -- common/autotest_common.sh@929 -- # uname 00:15:45.994 08:16:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:45.994 08:16:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2229597 00:15:46.254 08:16:19 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:15:46.254 08:16:19 -- 
common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:15:46.254 08:16:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2229597' 00:15:46.254 killing process with pid 2229597 00:15:46.254 08:16:19 -- common/autotest_common.sh@943 -- # kill 2229597 00:15:46.254 Received shutdown signal, test time was about 10.000000 seconds 00:15:46.254 00:15:46.254 Latency(us) 00:15:46.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.254 =================================================================================================================== 00:15:46.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.254 08:16:19 -- common/autotest_common.sh@948 -- # wait 2229597 00:15:46.254 08:16:19 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:46.513 08:16:20 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:46.513 08:16:20 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:46.772 08:16:20 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:46.772 08:16:20 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:46.772 08:16:20 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:46.772 [2024-02-13 08:16:20.422191] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:47.031 08:16:20 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:47.031 08:16:20 -- common/autotest_common.sh@638 -- # local es=0 00:15:47.031 08:16:20 -- common/autotest_common.sh@640 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:47.031 08:16:20 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.031 08:16:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.031 08:16:20 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.031 08:16:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.031 08:16:20 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.031 08:16:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:47.031 08:16:20 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.031 08:16:20 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:47.031 08:16:20 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:47.031 request: 00:15:47.031 { 00:15:47.031 "uuid": "91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b", 00:15:47.031 "method": "bdev_lvol_get_lvstores", 00:15:47.031 "req_id": 1 00:15:47.031 } 00:15:47.031 Got JSON-RPC error response 00:15:47.031 response: 00:15:47.031 { 00:15:47.031 "code": -19, 00:15:47.031 "message": "No such device" 00:15:47.031 } 00:15:47.031 08:16:20 -- common/autotest_common.sh@641 -- # es=1 00:15:47.031 08:16:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:47.031 08:16:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:47.031 08:16:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:47.031 08:16:20 -- target/nvmf_lvs_grow.sh@85 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:47.290 aio_bdev 00:15:47.290 08:16:20 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 45f2a0eb-f5dd-4069-ba19-4d71414c026f 00:15:47.290 08:16:20 -- common/autotest_common.sh@885 -- # local bdev_name=45f2a0eb-f5dd-4069-ba19-4d71414c026f 00:15:47.290 08:16:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:47.290 08:16:20 -- common/autotest_common.sh@887 -- # local i 00:15:47.290 08:16:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:47.290 08:16:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:47.290 08:16:20 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:47.290 08:16:20 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 45f2a0eb-f5dd-4069-ba19-4d71414c026f -t 2000 00:15:47.548 [ 00:15:47.549 { 00:15:47.549 "name": "45f2a0eb-f5dd-4069-ba19-4d71414c026f", 00:15:47.549 "aliases": [ 00:15:47.549 "lvs/lvol" 00:15:47.549 ], 00:15:47.549 "product_name": "Logical Volume", 00:15:47.549 "block_size": 4096, 00:15:47.549 "num_blocks": 38912, 00:15:47.549 "uuid": "45f2a0eb-f5dd-4069-ba19-4d71414c026f", 00:15:47.549 "assigned_rate_limits": { 00:15:47.549 "rw_ios_per_sec": 0, 00:15:47.549 "rw_mbytes_per_sec": 0, 00:15:47.549 "r_mbytes_per_sec": 0, 00:15:47.549 "w_mbytes_per_sec": 0 00:15:47.549 }, 00:15:47.549 "claimed": false, 00:15:47.549 "zoned": false, 00:15:47.549 "supported_io_types": { 00:15:47.549 "read": true, 00:15:47.549 "write": true, 00:15:47.549 "unmap": true, 00:15:47.549 "write_zeroes": true, 00:15:47.549 "flush": false, 00:15:47.549 "reset": true, 00:15:47.549 "compare": false, 00:15:47.549 "compare_and_write": false, 00:15:47.549 "abort": false, 00:15:47.549 "nvme_admin": false, 00:15:47.549 
"nvme_io": false 00:15:47.549 }, 00:15:47.549 "driver_specific": { 00:15:47.549 "lvol": { 00:15:47.549 "lvol_store_uuid": "91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b", 00:15:47.549 "base_bdev": "aio_bdev", 00:15:47.549 "thin_provision": false, 00:15:47.549 "snapshot": false, 00:15:47.549 "clone": false, 00:15:47.549 "esnap_clone": false 00:15:47.549 } 00:15:47.549 } 00:15:47.549 } 00:15:47.549 ] 00:15:47.549 08:16:21 -- common/autotest_common.sh@893 -- # return 0 00:15:47.549 08:16:21 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:47.549 08:16:21 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:47.808 08:16:21 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:47.808 08:16:21 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:47.808 08:16:21 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:47.808 08:16:21 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:47.808 08:16:21 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 45f2a0eb-f5dd-4069-ba19-4d71414c026f 00:15:48.066 08:16:21 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91b6e8b2-86c6-4dcc-ba7a-0a2807c0120b 00:15:48.325 08:16:21 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:48.325 08:16:21 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.584 00:15:48.584 real 0m15.556s 00:15:48.584 user 0m15.246s 00:15:48.584 sys 0m1.311s 00:15:48.584 08:16:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:15:48.584 08:16:22 -- common/autotest_common.sh@10 -- # set +x 00:15:48.584 ************************************ 00:15:48.584 END TEST lvs_grow_clean 00:15:48.584 ************************************ 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:48.584 08:16:22 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:48.584 08:16:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:48.584 08:16:22 -- common/autotest_common.sh@10 -- # set +x 00:15:48.584 ************************************ 00:15:48.584 START TEST lvs_grow_dirty 00:15:48.584 ************************************ 00:15:48.584 08:16:22 -- common/autotest_common.sh@1102 -- # lvs_grow dirty 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:48.584 08:16:22 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:48.843 08:16:22 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:15:48.843 08:16:22 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:15:48.843 08:16:22 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:49.101 08:16:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:49.101 08:16:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:49.101 08:16:22 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 lvol 150 00:15:49.101 08:16:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=43d85a66-29a1-4a06-a9ec-3bf522772375 00:15:49.101 08:16:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:49.101 08:16:22 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:49.360 [2024-02-13 08:16:22.903307] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:49.360 [2024-02-13 08:16:22.903355] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:49.360 true 00:15:49.360 08:16:22 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:15:49.360 08:16:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:49.619 08:16:23 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:49.619 08:16:23 -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:49.619 08:16:23 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 43d85a66-29a1-4a06-a9ec-3bf522772375 00:15:49.878 08:16:23 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:49.878 08:16:23 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:50.137 08:16:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2232197 00:15:50.137 08:16:23 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:50.137 08:16:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:50.137 08:16:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2232197 /var/tmp/bdevperf.sock 00:15:50.137 08:16:23 -- common/autotest_common.sh@817 -- # '[' -z 2232197 ']' 00:15:50.137 08:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.137 08:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.137 08:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:50.137 08:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.137 08:16:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.137 [2024-02-13 08:16:23.755188] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:15:50.137 [2024-02-13 08:16:23.755235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232197 ] 00:15:50.137 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.137 [2024-02-13 08:16:23.812398] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.396 [2024-02-13 08:16:23.882070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.965 08:16:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.965 08:16:24 -- common/autotest_common.sh@850 -- # return 0 00:15:50.965 08:16:24 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:51.224 Nvme0n1 00:15:51.224 08:16:24 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:51.484 [ 00:15:51.484 { 00:15:51.484 "name": "Nvme0n1", 00:15:51.484 "aliases": [ 00:15:51.484 "43d85a66-29a1-4a06-a9ec-3bf522772375" 00:15:51.484 ], 00:15:51.484 "product_name": "NVMe disk", 00:15:51.484 "block_size": 4096, 00:15:51.484 "num_blocks": 38912, 00:15:51.484 "uuid": "43d85a66-29a1-4a06-a9ec-3bf522772375", 00:15:51.484 "assigned_rate_limits": { 00:15:51.484 "rw_ios_per_sec": 0, 00:15:51.484 "rw_mbytes_per_sec": 0, 00:15:51.484 "r_mbytes_per_sec": 0, 00:15:51.484 "w_mbytes_per_sec": 0 00:15:51.484 }, 00:15:51.484 "claimed": false, 00:15:51.484 "zoned": false, 00:15:51.484 
"supported_io_types": { 00:15:51.484 "read": true, 00:15:51.484 "write": true, 00:15:51.484 "unmap": true, 00:15:51.484 "write_zeroes": true, 00:15:51.484 "flush": true, 00:15:51.484 "reset": true, 00:15:51.484 "compare": true, 00:15:51.484 "compare_and_write": true, 00:15:51.484 "abort": true, 00:15:51.484 "nvme_admin": true, 00:15:51.484 "nvme_io": true 00:15:51.484 }, 00:15:51.484 "driver_specific": { 00:15:51.484 "nvme": [ 00:15:51.484 { 00:15:51.484 "trid": { 00:15:51.484 "trtype": "TCP", 00:15:51.484 "adrfam": "IPv4", 00:15:51.484 "traddr": "10.0.0.2", 00:15:51.484 "trsvcid": "4420", 00:15:51.484 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:51.484 }, 00:15:51.484 "ctrlr_data": { 00:15:51.484 "cntlid": 1, 00:15:51.484 "vendor_id": "0x8086", 00:15:51.484 "model_number": "SPDK bdev Controller", 00:15:51.484 "serial_number": "SPDK0", 00:15:51.484 "firmware_revision": "24.05", 00:15:51.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:51.484 "oacs": { 00:15:51.484 "security": 0, 00:15:51.484 "format": 0, 00:15:51.484 "firmware": 0, 00:15:51.484 "ns_manage": 0 00:15:51.484 }, 00:15:51.484 "multi_ctrlr": true, 00:15:51.484 "ana_reporting": false 00:15:51.484 }, 00:15:51.484 "vs": { 00:15:51.484 "nvme_version": "1.3" 00:15:51.484 }, 00:15:51.484 "ns_data": { 00:15:51.484 "id": 1, 00:15:51.484 "can_share": true 00:15:51.484 } 00:15:51.484 } 00:15:51.484 ], 00:15:51.484 "mp_policy": "active_passive" 00:15:51.484 } 00:15:51.484 } 00:15:51.484 ] 00:15:51.484 08:16:24 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.484 08:16:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2232431 00:15:51.484 08:16:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:51.484 Running I/O for 10 seconds... 
00:15:52.423 Latency(us) 00:15:52.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.423 Nvme0n1 : 1.00 23820.00 93.05 0.00 0.00 0.00 0.00 0.00 00:15:52.423 =================================================================================================================== 00:15:52.423 Total : 23820.00 93.05 0.00 0.00 0.00 0.00 0.00 00:15:52.423 00:15:53.361 08:16:26 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:15:53.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.621 Nvme0n1 : 2.00 24070.00 94.02 0.00 0.00 0.00 0.00 0.00 00:15:53.621 =================================================================================================================== 00:15:53.621 Total : 24070.00 94.02 0.00 0.00 0.00 0.00 0.00 00:15:53.621 00:15:53.621 true 00:15:53.621 08:16:27 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:15:53.621 08:16:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:53.880 08:16:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:53.880 08:16:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:53.880 08:16:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 2232431 00:15:54.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:54.449 Nvme0n1 : 3.00 24114.33 94.20 0.00 0.00 0.00 0.00 0.00 00:15:54.449 =================================================================================================================== 00:15:54.449 Total : 24114.33 94.20 0.00 0.00 0.00 0.00 0.00 00:15:54.449 00:15:55.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.388 
Nvme0n1 : 4.00 24197.75 94.52 0.00 0.00 0.00 0.00 0.00 00:15:55.388 =================================================================================================================== 00:15:55.388 Total : 24197.75 94.52 0.00 0.00 0.00 0.00 0.00 00:15:55.388 00:15:56.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.769 Nvme0n1 : 5.00 24247.60 94.72 0.00 0.00 0.00 0.00 0.00 00:15:56.769 =================================================================================================================== 00:15:56.769 Total : 24247.60 94.72 0.00 0.00 0.00 0.00 0.00 00:15:56.769 00:15:57.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.708 Nvme0n1 : 6.00 24238.33 94.68 0.00 0.00 0.00 0.00 0.00 00:15:57.708 =================================================================================================================== 00:15:57.708 Total : 24238.33 94.68 0.00 0.00 0.00 0.00 0.00 00:15:57.708 00:15:58.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.659 Nvme0n1 : 7.00 24213.43 94.58 0.00 0.00 0.00 0.00 0.00 00:15:58.659 =================================================================================================================== 00:15:58.659 Total : 24213.43 94.58 0.00 0.00 0.00 0.00 0.00 00:15:58.659 00:15:59.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.646 Nvme0n1 : 8.00 24250.75 94.73 0.00 0.00 0.00 0.00 0.00 00:15:59.646 =================================================================================================================== 00:15:59.646 Total : 24250.75 94.73 0.00 0.00 0.00 0.00 0.00 00:15:59.646 00:16:00.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.583 Nvme0n1 : 9.00 24272.78 94.82 0.00 0.00 0.00 0.00 0.00 00:16:00.583 =================================================================================================================== 
00:16:00.584 Total : 24272.78 94.82 0.00 0.00 0.00 0.00 0.00 00:16:00.584 00:16:01.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.522 Nvme0n1 : 10.00 24303.10 94.93 0.00 0.00 0.00 0.00 0.00 00:16:01.522 =================================================================================================================== 00:16:01.522 Total : 24303.10 94.93 0.00 0.00 0.00 0.00 0.00 00:16:01.522 00:16:01.522 00:16:01.522 Latency(us) 00:16:01.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.522 Nvme0n1 : 10.01 24303.25 94.93 0.00 0.00 5263.43 2215.74 13856.18 00:16:01.522 =================================================================================================================== 00:16:01.522 Total : 24303.25 94.93 0.00 0.00 5263.43 2215.74 13856.18 00:16:01.522 0 00:16:01.522 08:16:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2232197 00:16:01.522 08:16:35 -- common/autotest_common.sh@924 -- # '[' -z 2232197 ']' 00:16:01.522 08:16:35 -- common/autotest_common.sh@928 -- # kill -0 2232197 00:16:01.522 08:16:35 -- common/autotest_common.sh@929 -- # uname 00:16:01.522 08:16:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:01.522 08:16:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2232197 00:16:01.522 08:16:35 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:16:01.522 08:16:35 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:16:01.522 08:16:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2232197' 00:16:01.522 killing process with pid 2232197 00:16:01.522 08:16:35 -- common/autotest_common.sh@943 -- # kill 2232197 00:16:01.522 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.522 00:16:01.522 Latency(us) 00:16:01.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:16:01.522 =================================================================================================================== 00:16:01.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.522 08:16:35 -- common/autotest_common.sh@948 -- # wait 2232197 00:16:01.782 08:16:35 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:02.041 08:16:35 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:02.041 08:16:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:02.041 08:16:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:02.041 08:16:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:02.041 08:16:35 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2229094 00:16:02.041 08:16:35 -- target/nvmf_lvs_grow.sh@74 -- # wait 2229094 00:16:02.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2229094 Killed "${NVMF_APP[@]}" "$@" 00:16:02.300 08:16:35 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:02.300 08:16:35 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:02.300 08:16:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:02.300 08:16:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:02.300 08:16:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.300 08:16:35 -- nvmf/common.sh@469 -- # nvmfpid=2234142 00:16:02.300 08:16:35 -- nvmf/common.sh@470 -- # waitforlisten 2234142 00:16:02.300 08:16:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:02.301 08:16:35 -- common/autotest_common.sh@817 -- # '[' -z 2234142 ']' 00:16:02.301 08:16:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.301 
08:16:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:02.301 08:16:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.301 08:16:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:02.301 08:16:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.301 [2024-02-13 08:16:35.810065] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:02.301 [2024-02-13 08:16:35.810115] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.301 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.301 [2024-02-13 08:16:35.874013] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.301 [2024-02-13 08:16:35.949215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:02.301 [2024-02-13 08:16:35.949318] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.301 [2024-02-13 08:16:35.949325] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.301 [2024-02-13 08:16:35.949332] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:02.301 [2024-02-13 08:16:35.949347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.238 08:16:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:03.238 08:16:36 -- common/autotest_common.sh@850 -- # return 0 00:16:03.238 08:16:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:03.238 08:16:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:03.238 08:16:36 -- common/autotest_common.sh@10 -- # set +x 00:16:03.238 08:16:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.238 08:16:36 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:03.238 [2024-02-13 08:16:36.790527] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:03.238 [2024-02-13 08:16:36.790612] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:03.238 [2024-02-13 08:16:36.790636] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:03.238 08:16:36 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:03.238 08:16:36 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 43d85a66-29a1-4a06-a9ec-3bf522772375 00:16:03.238 08:16:36 -- common/autotest_common.sh@885 -- # local bdev_name=43d85a66-29a1-4a06-a9ec-3bf522772375 00:16:03.238 08:16:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:03.238 08:16:36 -- common/autotest_common.sh@887 -- # local i 00:16:03.238 08:16:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:03.238 08:16:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:03.238 08:16:36 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:03.497 08:16:36 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 43d85a66-29a1-4a06-a9ec-3bf522772375 -t 2000 00:16:03.497 [ 00:16:03.497 { 00:16:03.497 "name": "43d85a66-29a1-4a06-a9ec-3bf522772375", 00:16:03.497 "aliases": [ 00:16:03.497 "lvs/lvol" 00:16:03.497 ], 00:16:03.497 "product_name": "Logical Volume", 00:16:03.497 "block_size": 4096, 00:16:03.497 "num_blocks": 38912, 00:16:03.497 "uuid": "43d85a66-29a1-4a06-a9ec-3bf522772375", 00:16:03.497 "assigned_rate_limits": { 00:16:03.497 "rw_ios_per_sec": 0, 00:16:03.497 "rw_mbytes_per_sec": 0, 00:16:03.497 "r_mbytes_per_sec": 0, 00:16:03.497 "w_mbytes_per_sec": 0 00:16:03.497 }, 00:16:03.497 "claimed": false, 00:16:03.497 "zoned": false, 00:16:03.497 "supported_io_types": { 00:16:03.497 "read": true, 00:16:03.497 "write": true, 00:16:03.497 "unmap": true, 00:16:03.497 "write_zeroes": true, 00:16:03.497 "flush": false, 00:16:03.497 "reset": true, 00:16:03.497 "compare": false, 00:16:03.497 "compare_and_write": false, 00:16:03.497 "abort": false, 00:16:03.497 "nvme_admin": false, 00:16:03.497 "nvme_io": false 00:16:03.497 }, 00:16:03.497 "driver_specific": { 00:16:03.497 "lvol": { 00:16:03.497 "lvol_store_uuid": "f3d164b9-a3a0-425f-a7e1-b9aba76bef66", 00:16:03.497 "base_bdev": "aio_bdev", 00:16:03.497 "thin_provision": false, 00:16:03.497 "snapshot": false, 00:16:03.497 "clone": false, 00:16:03.497 "esnap_clone": false 00:16:03.497 } 00:16:03.497 } 00:16:03.497 } 00:16:03.497 ] 00:16:03.497 08:16:37 -- common/autotest_common.sh@893 -- # return 0 00:16:03.497 08:16:37 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:03.497 08:16:37 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:03.756 08:16:37 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:03.756 08:16:37 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:03.756 08:16:37 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:04.015 08:16:37 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:04.015 08:16:37 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:04.015 [2024-02-13 08:16:37.611178] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:04.015 08:16:37 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:04.015 08:16:37 -- common/autotest_common.sh@638 -- # local es=0 00:16:04.015 08:16:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:04.015 08:16:37 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.015 08:16:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.015 08:16:37 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.015 08:16:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.015 08:16:37 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.015 08:16:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.015 08:16:37 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.015 08:16:37 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:04.015 
08:16:37 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:04.275 request: 00:16:04.275 { 00:16:04.275 "uuid": "f3d164b9-a3a0-425f-a7e1-b9aba76bef66", 00:16:04.275 "method": "bdev_lvol_get_lvstores", 00:16:04.275 "req_id": 1 00:16:04.275 } 00:16:04.275 Got JSON-RPC error response 00:16:04.275 response: 00:16:04.275 { 00:16:04.275 "code": -19, 00:16:04.275 "message": "No such device" 00:16:04.275 } 00:16:04.275 08:16:37 -- common/autotest_common.sh@641 -- # es=1 00:16:04.275 08:16:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:04.275 08:16:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:04.275 08:16:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:04.275 08:16:37 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:04.534 aio_bdev 00:16:04.534 08:16:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 43d85a66-29a1-4a06-a9ec-3bf522772375 00:16:04.534 08:16:37 -- common/autotest_common.sh@885 -- # local bdev_name=43d85a66-29a1-4a06-a9ec-3bf522772375 00:16:04.534 08:16:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:04.534 08:16:37 -- common/autotest_common.sh@887 -- # local i 00:16:04.534 08:16:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:04.534 08:16:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:04.534 08:16:37 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:04.534 08:16:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 43d85a66-29a1-4a06-a9ec-3bf522772375 -t 2000 00:16:04.794 [ 00:16:04.794 { 00:16:04.794 "name": 
"43d85a66-29a1-4a06-a9ec-3bf522772375", 00:16:04.794 "aliases": [ 00:16:04.794 "lvs/lvol" 00:16:04.794 ], 00:16:04.794 "product_name": "Logical Volume", 00:16:04.794 "block_size": 4096, 00:16:04.794 "num_blocks": 38912, 00:16:04.794 "uuid": "43d85a66-29a1-4a06-a9ec-3bf522772375", 00:16:04.794 "assigned_rate_limits": { 00:16:04.794 "rw_ios_per_sec": 0, 00:16:04.794 "rw_mbytes_per_sec": 0, 00:16:04.794 "r_mbytes_per_sec": 0, 00:16:04.794 "w_mbytes_per_sec": 0 00:16:04.794 }, 00:16:04.794 "claimed": false, 00:16:04.794 "zoned": false, 00:16:04.794 "supported_io_types": { 00:16:04.794 "read": true, 00:16:04.794 "write": true, 00:16:04.794 "unmap": true, 00:16:04.794 "write_zeroes": true, 00:16:04.794 "flush": false, 00:16:04.794 "reset": true, 00:16:04.794 "compare": false, 00:16:04.794 "compare_and_write": false, 00:16:04.794 "abort": false, 00:16:04.794 "nvme_admin": false, 00:16:04.794 "nvme_io": false 00:16:04.794 }, 00:16:04.794 "driver_specific": { 00:16:04.794 "lvol": { 00:16:04.794 "lvol_store_uuid": "f3d164b9-a3a0-425f-a7e1-b9aba76bef66", 00:16:04.794 "base_bdev": "aio_bdev", 00:16:04.794 "thin_provision": false, 00:16:04.794 "snapshot": false, 00:16:04.794 "clone": false, 00:16:04.794 "esnap_clone": false 00:16:04.794 } 00:16:04.794 } 00:16:04.794 } 00:16:04.794 ] 00:16:04.794 08:16:38 -- common/autotest_common.sh@893 -- # return 0 00:16:04.794 08:16:38 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:04.794 08:16:38 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:04.794 08:16:38 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:04.794 08:16:38 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:04.794 08:16:38 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:05.053 
08:16:38 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:05.053 08:16:38 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 43d85a66-29a1-4a06-a9ec-3bf522772375 00:16:05.315 08:16:38 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3d164b9-a3a0-425f-a7e1-b9aba76bef66 00:16:05.315 08:16:38 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:05.574 08:16:39 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:05.574 00:16:05.574 real 0m17.074s 00:16:05.574 user 0m43.890s 00:16:05.574 sys 0m3.778s 00:16:05.574 08:16:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:05.574 08:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:05.574 ************************************ 00:16:05.574 END TEST lvs_grow_dirty 00:16:05.574 ************************************ 00:16:05.574 08:16:39 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:05.574 08:16:39 -- common/autotest_common.sh@794 -- # type=--id 00:16:05.574 08:16:39 -- common/autotest_common.sh@795 -- # id=0 00:16:05.574 08:16:39 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:05.574 08:16:39 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:05.574 08:16:39 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:05.574 08:16:39 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:05.574 08:16:39 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:05.574 08:16:39 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:05.574 nvmf_trace.0 00:16:05.574 08:16:39 -- common/autotest_common.sh@809 -- # 
return 0 00:16:05.574 08:16:39 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:05.574 08:16:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:05.574 08:16:39 -- nvmf/common.sh@116 -- # sync 00:16:05.574 08:16:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:05.574 08:16:39 -- nvmf/common.sh@119 -- # set +e 00:16:05.574 08:16:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:05.574 08:16:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:05.574 rmmod nvme_tcp 00:16:05.574 rmmod nvme_fabrics 00:16:05.575 rmmod nvme_keyring 00:16:05.575 08:16:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:05.834 08:16:39 -- nvmf/common.sh@123 -- # set -e 00:16:05.834 08:16:39 -- nvmf/common.sh@124 -- # return 0 00:16:05.834 08:16:39 -- nvmf/common.sh@477 -- # '[' -n 2234142 ']' 00:16:05.834 08:16:39 -- nvmf/common.sh@478 -- # killprocess 2234142 00:16:05.834 08:16:39 -- common/autotest_common.sh@924 -- # '[' -z 2234142 ']' 00:16:05.834 08:16:39 -- common/autotest_common.sh@928 -- # kill -0 2234142 00:16:05.834 08:16:39 -- common/autotest_common.sh@929 -- # uname 00:16:05.834 08:16:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:05.834 08:16:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2234142 00:16:05.834 08:16:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:05.834 08:16:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:05.834 08:16:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2234142' 00:16:05.834 killing process with pid 2234142 00:16:05.834 08:16:39 -- common/autotest_common.sh@943 -- # kill 2234142 00:16:05.834 08:16:39 -- common/autotest_common.sh@948 -- # wait 2234142 00:16:05.834 08:16:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:05.834 08:16:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:05.834 08:16:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:05.835 08:16:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.835 08:16:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:05.835 08:16:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.835 08:16:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.835 08:16:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.377 08:16:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:08.377 00:16:08.377 real 0m41.598s 00:16:08.377 user 1m4.468s 00:16:08.377 sys 0m9.480s 00:16:08.377 08:16:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:08.377 08:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.377 ************************************ 00:16:08.377 END TEST nvmf_lvs_grow 00:16:08.377 ************************************ 00:16:08.377 08:16:41 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:08.377 08:16:41 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:08.377 08:16:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:08.377 08:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.377 ************************************ 00:16:08.377 START TEST nvmf_bdev_io_wait 00:16:08.377 ************************************ 00:16:08.377 08:16:41 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:08.377 * Looking for test storage... 
00:16:08.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.378 08:16:41 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.378 08:16:41 -- nvmf/common.sh@7 -- # uname -s 00:16:08.378 08:16:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.378 08:16:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.378 08:16:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.378 08:16:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.378 08:16:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.378 08:16:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.378 08:16:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.378 08:16:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.378 08:16:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.378 08:16:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.378 08:16:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:08.378 08:16:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:08.378 08:16:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.378 08:16:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.378 08:16:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.378 08:16:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.378 08:16:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.378 08:16:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.378 08:16:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.378 08:16:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.378 08:16:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.378 08:16:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.378 08:16:41 -- paths/export.sh@5 -- # export PATH 00:16:08.378 08:16:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.378 08:16:41 -- nvmf/common.sh@46 -- # : 0 00:16:08.378 08:16:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:08.378 08:16:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:08.378 08:16:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:08.378 08:16:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.378 08:16:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.378 08:16:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:08.378 08:16:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:08.378 08:16:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:08.378 08:16:41 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.378 08:16:41 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.378 08:16:41 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:08.378 08:16:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:08.378 08:16:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.378 08:16:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:08.378 08:16:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:08.378 08:16:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:08.378 08:16:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.378 08:16:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.378 08:16:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.378 
08:16:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:08.378 08:16:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:08.378 08:16:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:08.378 08:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:14.948 08:16:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:14.948 08:16:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:14.948 08:16:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:14.948 08:16:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:14.948 08:16:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:14.948 08:16:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:14.948 08:16:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:14.948 08:16:47 -- nvmf/common.sh@294 -- # net_devs=() 00:16:14.948 08:16:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:14.948 08:16:47 -- nvmf/common.sh@295 -- # e810=() 00:16:14.948 08:16:47 -- nvmf/common.sh@295 -- # local -ga e810 00:16:14.948 08:16:47 -- nvmf/common.sh@296 -- # x722=() 00:16:14.948 08:16:47 -- nvmf/common.sh@296 -- # local -ga x722 00:16:14.948 08:16:47 -- nvmf/common.sh@297 -- # mlx=() 00:16:14.948 08:16:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:14.948 08:16:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.948 08:16:47 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.948 08:16:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:14.948 08:16:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:14.948 08:16:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:14.948 08:16:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:14.948 08:16:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:14.948 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:14.948 08:16:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:14.948 08:16:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:14.948 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:14.948 08:16:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:14.948 08:16:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:14.948 08:16:47 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:14.948 08:16:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.948 08:16:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:14.948 08:16:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.948 08:16:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:14.948 Found net devices under 0000:af:00.0: cvl_0_0 00:16:14.948 08:16:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.948 08:16:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:14.948 08:16:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.948 08:16:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:14.948 08:16:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.948 08:16:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:14.948 Found net devices under 0000:af:00.1: cvl_0_1 00:16:14.948 08:16:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.948 08:16:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:14.948 08:16:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:14.948 08:16:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:14.948 08:16:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.948 08:16:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.948 08:16:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.948 08:16:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:14.948 08:16:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.948 08:16:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.948 08:16:47 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:14.948 08:16:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.948 08:16:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.948 08:16:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:14.948 08:16:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:14.948 08:16:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.948 08:16:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.948 08:16:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.948 08:16:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.948 08:16:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:14.948 08:16:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.948 08:16:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.948 08:16:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.948 08:16:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:14.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:16:14.948 00:16:14.948 --- 10.0.0.2 ping statistics --- 00:16:14.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.948 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:14.948 08:16:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:16:14.948 00:16:14.948 --- 10.0.0.1 ping statistics --- 00:16:14.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.948 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:16:14.948 08:16:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.948 08:16:47 -- nvmf/common.sh@410 -- # return 0 00:16:14.948 08:16:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.948 08:16:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.948 08:16:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.948 08:16:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.948 08:16:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.948 08:16:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.948 08:16:47 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:14.948 08:16:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.948 08:16:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:14.948 08:16:47 -- common/autotest_common.sh@10 -- # set +x 00:16:14.948 08:16:47 -- nvmf/common.sh@469 -- # nvmfpid=2238604 00:16:14.948 08:16:47 -- nvmf/common.sh@470 -- # waitforlisten 2238604 00:16:14.948 08:16:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:14.949 08:16:47 -- common/autotest_common.sh@817 -- # '[' -z 2238604 ']' 00:16:14.949 08:16:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.949 08:16:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:14.949 08:16:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:14.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.949 08:16:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:14.949 08:16:47 -- common/autotest_common.sh@10 -- # set +x 00:16:14.949 [2024-02-13 08:16:47.687879] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:14.949 [2024-02-13 08:16:47.687918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.949 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.949 [2024-02-13 08:16:47.750148] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.949 [2024-02-13 08:16:47.819854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.949 [2024-02-13 08:16:47.819966] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.949 [2024-02-13 08:16:47.819973] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.949 [2024-02-13 08:16:47.819979] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:14.949 [2024-02-13 08:16:47.820027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.949 [2024-02-13 08:16:47.820123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.949 [2024-02-13 08:16:47.820188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.949 [2024-02-13 08:16:47.820189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.949 08:16:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:14.949 08:16:48 -- common/autotest_common.sh@850 -- # return 0 00:16:14.949 08:16:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.949 08:16:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:14.949 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.949 08:16:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.949 08:16:48 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:14.949 08:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.949 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.949 08:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.949 08:16:48 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:14.949 08:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.949 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.949 08:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.949 08:16:48 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.949 08:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.949 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.949 [2024-02-13 08:16:48.597278] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.949 08:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.949 08:16:48 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.949 08:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.949 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:14.949 Malloc0 00:16:14.949 08:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:15.209 08:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.209 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.209 08:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:15.209 08:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.209 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.209 08:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.209 08:16:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.209 08:16:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.209 [2024-02-13 08:16:48.657365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.209 08:16:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2238855 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@30 -- # READ_PID=2238857 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # config=() 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # local 
subsystem config 00:16:15.209 08:16:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:15.209 08:16:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:15.209 { 00:16:15.209 "params": { 00:16:15.209 "name": "Nvme$subsystem", 00:16:15.209 "trtype": "$TEST_TRANSPORT", 00:16:15.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.209 "adrfam": "ipv4", 00:16:15.209 "trsvcid": "$NVMF_PORT", 00:16:15.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.209 "hdgst": ${hdgst:-false}, 00:16:15.209 "ddgst": ${ddgst:-false} 00:16:15.209 }, 00:16:15.209 "method": "bdev_nvme_attach_controller" 00:16:15.209 } 00:16:15.209 EOF 00:16:15.209 )") 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2238859 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # config=() 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # local subsystem config 00:16:15.209 08:16:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:15.209 08:16:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:15.209 { 00:16:15.209 "params": { 00:16:15.209 "name": "Nvme$subsystem", 00:16:15.209 "trtype": "$TEST_TRANSPORT", 00:16:15.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.209 "adrfam": "ipv4", 00:16:15.209 "trsvcid": "$NVMF_PORT", 00:16:15.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.209 "hdgst": ${hdgst:-false}, 00:16:15.209 "ddgst": ${ddgst:-false} 00:16:15.209 }, 00:16:15.209 "method": "bdev_nvme_attach_controller" 00:16:15.209 } 00:16:15.209 EOF 00:16:15.209 )") 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2238862 00:16:15.209 08:16:48 -- nvmf/common.sh@542 -- # cat 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@35 -- # sync 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # config=() 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # local subsystem config 00:16:15.209 08:16:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:15.209 08:16:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:15.209 { 00:16:15.209 "params": { 00:16:15.209 "name": "Nvme$subsystem", 00:16:15.209 "trtype": "$TEST_TRANSPORT", 00:16:15.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.209 "adrfam": "ipv4", 00:16:15.209 "trsvcid": "$NVMF_PORT", 00:16:15.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.209 "hdgst": ${hdgst:-false}, 00:16:15.209 "ddgst": ${ddgst:-false} 00:16:15.209 }, 00:16:15.209 "method": "bdev_nvme_attach_controller" 00:16:15.209 } 00:16:15.209 EOF 00:16:15.209 )") 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:15.209 08:16:48 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # config=() 00:16:15.209 08:16:48 -- nvmf/common.sh@542 -- # cat 00:16:15.209 08:16:48 -- nvmf/common.sh@520 -- # local subsystem config 00:16:15.209 08:16:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:15.209 08:16:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:15.209 { 00:16:15.209 "params": { 00:16:15.209 "name": "Nvme$subsystem", 00:16:15.210 "trtype": "$TEST_TRANSPORT", 00:16:15.210 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:16:15.210 "adrfam": "ipv4", 00:16:15.210 "trsvcid": "$NVMF_PORT", 00:16:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.210 "hdgst": ${hdgst:-false}, 00:16:15.210 "ddgst": ${ddgst:-false} 00:16:15.210 }, 00:16:15.210 "method": "bdev_nvme_attach_controller" 00:16:15.210 } 00:16:15.210 EOF 00:16:15.210 )") 00:16:15.210 08:16:48 -- nvmf/common.sh@542 -- # cat 00:16:15.210 08:16:48 -- target/bdev_io_wait.sh@37 -- # wait 2238855 00:16:15.210 08:16:48 -- nvmf/common.sh@544 -- # jq . 00:16:15.210 08:16:48 -- nvmf/common.sh@542 -- # cat 00:16:15.210 08:16:48 -- nvmf/common.sh@544 -- # jq . 00:16:15.210 08:16:48 -- nvmf/common.sh@545 -- # IFS=, 00:16:15.210 08:16:48 -- nvmf/common.sh@544 -- # jq . 00:16:15.210 08:16:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:15.210 "params": { 00:16:15.210 "name": "Nvme1", 00:16:15.210 "trtype": "tcp", 00:16:15.210 "traddr": "10.0.0.2", 00:16:15.210 "adrfam": "ipv4", 00:16:15.210 "trsvcid": "4420", 00:16:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.210 "hdgst": false, 00:16:15.210 "ddgst": false 00:16:15.210 }, 00:16:15.210 "method": "bdev_nvme_attach_controller" 00:16:15.210 }' 00:16:15.210 08:16:48 -- nvmf/common.sh@545 -- # IFS=, 00:16:15.210 08:16:48 -- nvmf/common.sh@544 -- # jq . 
00:16:15.210 08:16:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:15.210 "params": { 00:16:15.210 "name": "Nvme1", 00:16:15.210 "trtype": "tcp", 00:16:15.210 "traddr": "10.0.0.2", 00:16:15.210 "adrfam": "ipv4", 00:16:15.210 "trsvcid": "4420", 00:16:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.210 "hdgst": false, 00:16:15.210 "ddgst": false 00:16:15.210 }, 00:16:15.210 "method": "bdev_nvme_attach_controller" 00:16:15.210 }' 00:16:15.210 08:16:48 -- nvmf/common.sh@545 -- # IFS=, 00:16:15.210 08:16:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:15.210 "params": { 00:16:15.210 "name": "Nvme1", 00:16:15.210 "trtype": "tcp", 00:16:15.210 "traddr": "10.0.0.2", 00:16:15.210 "adrfam": "ipv4", 00:16:15.210 "trsvcid": "4420", 00:16:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.210 "hdgst": false, 00:16:15.210 "ddgst": false 00:16:15.210 }, 00:16:15.210 "method": "bdev_nvme_attach_controller" 00:16:15.210 }' 00:16:15.210 08:16:48 -- nvmf/common.sh@545 -- # IFS=, 00:16:15.210 08:16:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:15.210 "params": { 00:16:15.210 "name": "Nvme1", 00:16:15.210 "trtype": "tcp", 00:16:15.210 "traddr": "10.0.0.2", 00:16:15.210 "adrfam": "ipv4", 00:16:15.210 "trsvcid": "4420", 00:16:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.210 "hdgst": false, 00:16:15.210 "ddgst": false 00:16:15.210 }, 00:16:15.210 "method": "bdev_nvme_attach_controller" 00:16:15.210 }' 00:16:15.210 [2024-02-13 08:16:48.705028] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:16:15.210 [2024-02-13 08:16:48.705074] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:15.210 [2024-02-13 08:16:48.706026] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:15.210 [2024-02-13 08:16:48.706069] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:15.210 [2024-02-13 08:16:48.706674] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:15.210 [2024-02-13 08:16:48.706713] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:15.210 [2024-02-13 08:16:48.707278] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:16:15.210 [2024-02-13 08:16:48.707318] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:15.210 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.210 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.470 [2024-02-13 08:16:48.902884] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.470 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.470 [2024-02-13 08:16:48.979693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:15.470 [2024-02-13 08:16:48.979752] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:15.470 [2024-02-13 08:16:48.993427] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.470 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.470 [2024-02-13 08:16:49.039244] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.470 [2024-02-13 08:16:49.082694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:15.470 [2024-02-13 08:16:49.082753] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:15.470 [2024-02-13 08:16:49.097401] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.470 [2024-02-13 08:16:49.115097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:15.470 [2024-02-13 08:16:49.115154] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:15.729 
[2024-02-13 08:16:49.172259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:15.729 [2024-02-13 08:16:49.172301] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:15.729 Running I/O for 1 seconds... 00:16:15.729 Running I/O for 1 seconds... 00:16:15.729 Running I/O for 1 seconds... 00:16:15.729 Running I/O for 1 seconds... 00:16:16.668 00:16:16.668 Latency(us) 00:16:16.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.668 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:16.668 Nvme1n1 : 1.01 8273.90 32.32 0.00 0.00 15362.25 5617.37 22469.49 00:16:16.668 =================================================================================================================== 00:16:16.668 Total : 8273.90 32.32 0.00 0.00 15362.25 5617.37 22469.49 00:16:16.668 [2024-02-13 08:16:50.215455] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:16.668 00:16:16.668 Latency(us) 00:16:16.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.668 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:16.668 Nvme1n1 : 1.00 8168.51 31.91 0.00 0.00 15627.67 4930.80 24092.28 00:16:16.668 =================================================================================================================== 00:16:16.668 Total : 8168.51 31.91 0.00 0.00 15627.67 4930.80 24092.28 00:16:16.668 [2024-02-13 08:16:50.288801] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:16.668 00:16:16.668 Latency(us) 00:16:16.668 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.668 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:16.668 Nvme1n1 : 1.00 256818.88 1003.20 0.00 0.00 496.54 199.92 1224.90 00:16:16.668 =================================================================================================================== 00:16:16.668 Total : 256818.88 1003.20 0.00 0.00 496.54 199.92 1224.90 00:16:16.668 [2024-02-13 08:16:50.304021] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:16.668 00:16:16.668 Latency(us) 00:16:16.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.668 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:16.668 Nvme1n1 : 1.01 12476.99 48.74 0.00 0.00 10226.45 5679.79 22843.98 00:16:16.668 =================================================================================================================== 00:16:16.668 Total : 12476.99 48.74 0.00 0.00 10226.45 5679.79 22843.98 00:16:16.668 [2024-02-13 08:16:50.338811] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:16:16.928 08:16:50 -- target/bdev_io_wait.sh@38 -- # wait 2238857 00:16:16.928 08:16:50 -- target/bdev_io_wait.sh@39 -- # wait 2238859 00:16:16.928 08:16:50 -- target/bdev_io_wait.sh@40 -- # wait 2238862 00:16:16.928 08:16:50 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.928 08:16:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.928 08:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:17.188 08:16:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.188 08:16:50 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT 
SIGTERM EXIT 00:16:17.188 08:16:50 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:17.188 08:16:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:17.188 08:16:50 -- nvmf/common.sh@116 -- # sync 00:16:17.188 08:16:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:17.188 08:16:50 -- nvmf/common.sh@119 -- # set +e 00:16:17.188 08:16:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:17.188 08:16:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:17.188 rmmod nvme_tcp 00:16:17.188 rmmod nvme_fabrics 00:16:17.188 rmmod nvme_keyring 00:16:17.188 08:16:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:17.188 08:16:50 -- nvmf/common.sh@123 -- # set -e 00:16:17.188 08:16:50 -- nvmf/common.sh@124 -- # return 0 00:16:17.188 08:16:50 -- nvmf/common.sh@477 -- # '[' -n 2238604 ']' 00:16:17.188 08:16:50 -- nvmf/common.sh@478 -- # killprocess 2238604 00:16:17.188 08:16:50 -- common/autotest_common.sh@924 -- # '[' -z 2238604 ']' 00:16:17.188 08:16:50 -- common/autotest_common.sh@928 -- # kill -0 2238604 00:16:17.188 08:16:50 -- common/autotest_common.sh@929 -- # uname 00:16:17.188 08:16:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:17.188 08:16:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2238604 00:16:17.188 08:16:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:17.188 08:16:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:17.188 08:16:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2238604' 00:16:17.188 killing process with pid 2238604 00:16:17.188 08:16:50 -- common/autotest_common.sh@943 -- # kill 2238604 00:16:17.188 08:16:50 -- common/autotest_common.sh@948 -- # wait 2238604 00:16:17.449 08:16:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:17.449 08:16:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:17.449 08:16:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:17.449 08:16:50 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.449 08:16:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:17.449 08:16:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.449 08:16:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.449 08:16:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.357 08:16:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:19.357 00:16:19.357 real 0m11.366s 00:16:19.357 user 0m19.613s 00:16:19.357 sys 0m5.936s 00:16:19.357 08:16:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:19.357 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:16:19.357 ************************************ 00:16:19.357 END TEST nvmf_bdev_io_wait 00:16:19.357 ************************************ 00:16:19.357 08:16:53 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:19.357 08:16:53 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:19.357 08:16:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:19.357 08:16:53 -- common/autotest_common.sh@10 -- # set +x 00:16:19.357 ************************************ 00:16:19.357 START TEST nvmf_queue_depth 00:16:19.357 ************************************ 00:16:19.357 08:16:53 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:19.616 * Looking for test storage... 
00:16:19.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.616 08:16:53 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.616 08:16:53 -- nvmf/common.sh@7 -- # uname -s 00:16:19.616 08:16:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.616 08:16:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.616 08:16:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.616 08:16:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.616 08:16:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.616 08:16:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.616 08:16:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.616 08:16:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.616 08:16:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.616 08:16:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.616 08:16:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:19.616 08:16:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:19.616 08:16:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.616 08:16:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.616 08:16:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.616 08:16:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.616 08:16:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.616 08:16:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.616 08:16:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.616 08:16:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.616 08:16:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.616 08:16:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.616 08:16:53 -- paths/export.sh@5 -- # export PATH 00:16:19.616 08:16:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.616 08:16:53 -- nvmf/common.sh@46 -- # : 0 00:16:19.616 08:16:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:19.616 08:16:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:19.616 08:16:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:19.616 08:16:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.616 08:16:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.617 08:16:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:19.617 08:16:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:19.617 08:16:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:19.617 08:16:53 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:19.617 08:16:53 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:19.617 08:16:53 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:19.617 08:16:53 -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:19.617 08:16:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:19.617 08:16:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.617 08:16:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:19.617 08:16:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:19.617 08:16:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:19.617 08:16:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.617 08:16:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:19.617 08:16:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.617 08:16:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:19.617 08:16:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:19.617 08:16:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:19.617 08:16:53 -- common/autotest_common.sh@10 -- # set +x 00:16:24.915 08:16:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:24.915 08:16:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:24.915 08:16:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:24.915 08:16:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:24.915 08:16:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:24.915 08:16:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:24.915 08:16:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:24.915 08:16:58 -- nvmf/common.sh@294 -- # net_devs=() 00:16:24.915 08:16:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:24.915 08:16:58 -- nvmf/common.sh@295 -- # e810=() 00:16:24.915 08:16:58 -- nvmf/common.sh@295 -- # local -ga e810 00:16:24.915 08:16:58 -- nvmf/common.sh@296 -- # x722=() 00:16:24.915 08:16:58 -- nvmf/common.sh@296 -- # local -ga x722 00:16:24.915 08:16:58 -- nvmf/common.sh@297 -- # mlx=() 00:16:24.915 08:16:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:24.915 08:16:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.915 08:16:58 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.915 08:16:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:24.915 08:16:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:24.915 08:16:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:24.915 08:16:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.915 08:16:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:24.915 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:24.915 08:16:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:24.915 08:16:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:24.915 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:24.915 08:16:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:24.915 
08:16:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.915 08:16:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.915 08:16:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.915 08:16:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.915 08:16:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:24.915 Found net devices under 0000:af:00.0: cvl_0_0 00:16:24.915 08:16:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.915 08:16:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:24.915 08:16:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.915 08:16:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:24.915 08:16:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.915 08:16:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:24.915 Found net devices under 0000:af:00.1: cvl_0_1 00:16:24.915 08:16:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.915 08:16:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:24.915 08:16:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:24.915 08:16:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:24.915 08:16:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:24.915 08:16:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.915 08:16:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.915 08:16:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.915 08:16:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:24.915 08:16:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.916 08:16:58 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.916 08:16:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:24.916 08:16:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.916 08:16:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.916 08:16:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:24.916 08:16:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:24.916 08:16:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.916 08:16:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.916 08:16:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.916 08:16:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.201 08:16:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:25.201 08:16:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.201 08:16:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.201 08:16:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.201 08:16:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:25.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:16:25.201 00:16:25.201 --- 10.0.0.2 ping statistics --- 00:16:25.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.201 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:16:25.201 08:16:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:16:25.201 00:16:25.201 --- 10.0.0.1 ping statistics --- 00:16:25.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.201 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:16:25.201 08:16:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.201 08:16:58 -- nvmf/common.sh@410 -- # return 0 00:16:25.201 08:16:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:25.201 08:16:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.201 08:16:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:25.201 08:16:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:25.201 08:16:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.201 08:16:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:25.201 08:16:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:25.201 08:16:58 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:25.201 08:16:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:25.201 08:16:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:25.201 08:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:25.201 08:16:58 -- nvmf/common.sh@469 -- # nvmfpid=2242906 00:16:25.201 08:16:58 -- nvmf/common.sh@470 -- # waitforlisten 2242906 00:16:25.201 08:16:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.201 08:16:58 -- common/autotest_common.sh@817 -- # '[' -z 2242906 ']' 00:16:25.201 08:16:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.201 08:16:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:25.201 08:16:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:25.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.201 08:16:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:25.201 08:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:25.201 [2024-02-13 08:16:58.803660] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:25.201 [2024-02-13 08:16:58.803701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.201 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.201 [2024-02-13 08:16:58.864736] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.461 [2024-02-13 08:16:58.941053] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:25.461 [2024-02-13 08:16:58.941152] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.461 [2024-02-13 08:16:58.941160] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.461 [2024-02-13 08:16:58.941166] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:25.461 [2024-02-13 08:16:58.941186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.030 08:16:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:26.030 08:16:59 -- common/autotest_common.sh@850 -- # return 0 00:16:26.030 08:16:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:26.030 08:16:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:26.030 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.030 08:16:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.030 08:16:59 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.030 08:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.030 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.030 [2024-02-13 08:16:59.634153] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.030 08:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.030 08:16:59 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:26.030 08:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.030 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.030 Malloc0 00:16:26.030 08:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.030 08:16:59 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:26.030 08:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.030 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.030 08:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.030 08:16:59 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.030 08:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.030 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.030 08:16:59 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.030 08:16:59 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.030 08:16:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.030 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.030 [2024-02-13 08:16:59.693052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.030 08:16:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.030 08:16:59 -- target/queue_depth.sh@30 -- # bdevperf_pid=2243144 00:16:26.030 08:16:59 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:26.030 08:16:59 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.030 08:16:59 -- target/queue_depth.sh@33 -- # waitforlisten 2243144 /var/tmp/bdevperf.sock 00:16:26.030 08:16:59 -- common/autotest_common.sh@817 -- # '[' -z 2243144 ']' 00:16:26.030 08:16:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.030 08:16:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:26.030 08:16:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.030 08:16:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:26.030 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.289 [2024-02-13 08:16:59.735099] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:16:26.289 [2024-02-13 08:16:59.735141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243144 ] 00:16:26.289 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.289 [2024-02-13 08:16:59.794889] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.289 [2024-02-13 08:16:59.871602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.855 08:17:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:26.855 08:17:00 -- common/autotest_common.sh@850 -- # return 0 00:16:26.855 08:17:00 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:26.855 08:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.855 08:17:00 -- common/autotest_common.sh@10 -- # set +x 00:16:27.114 NVMe0n1 00:16:27.114 08:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.114 08:17:00 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:27.114 Running I/O for 10 seconds... 
00:16:39.325 00:16:39.325 Latency(us) 00:16:39.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.325 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:39.325 Verification LBA range: start 0x0 length 0x4000 00:16:39.325 NVMe0n1 : 10.05 18942.49 73.99 0.00 0.00 53907.44 11234.74 47934.90 00:16:39.325 =================================================================================================================== 00:16:39.325 Total : 18942.49 73.99 0.00 0.00 53907.44 11234.74 47934.90 00:16:39.325 0 00:16:39.325 08:17:10 -- target/queue_depth.sh@39 -- # killprocess 2243144 00:16:39.325 08:17:10 -- common/autotest_common.sh@924 -- # '[' -z 2243144 ']' 00:16:39.325 08:17:10 -- common/autotest_common.sh@928 -- # kill -0 2243144 00:16:39.325 08:17:10 -- common/autotest_common.sh@929 -- # uname 00:16:39.325 08:17:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:39.325 08:17:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2243144 00:16:39.325 08:17:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:39.325 08:17:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:39.325 08:17:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2243144' 00:16:39.325 killing process with pid 2243144 00:16:39.325 08:17:10 -- common/autotest_common.sh@943 -- # kill 2243144 00:16:39.325 Received shutdown signal, test time was about 10.000000 seconds 00:16:39.325 00:16:39.325 Latency(us) 00:16:39.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.325 =================================================================================================================== 00:16:39.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:39.325 08:17:10 -- common/autotest_common.sh@948 -- # wait 2243144 00:16:39.325 08:17:11 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:39.325 08:17:11 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:16:39.325 08:17:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:39.325 08:17:11 -- nvmf/common.sh@116 -- # sync 00:16:39.325 08:17:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:39.325 08:17:11 -- nvmf/common.sh@119 -- # set +e 00:16:39.325 08:17:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:39.325 08:17:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:39.325 rmmod nvme_tcp 00:16:39.325 rmmod nvme_fabrics 00:16:39.325 rmmod nvme_keyring 00:16:39.325 08:17:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:39.325 08:17:11 -- nvmf/common.sh@123 -- # set -e 00:16:39.325 08:17:11 -- nvmf/common.sh@124 -- # return 0 00:16:39.325 08:17:11 -- nvmf/common.sh@477 -- # '[' -n 2242906 ']' 00:16:39.325 08:17:11 -- nvmf/common.sh@478 -- # killprocess 2242906 00:16:39.325 08:17:11 -- common/autotest_common.sh@924 -- # '[' -z 2242906 ']' 00:16:39.325 08:17:11 -- common/autotest_common.sh@928 -- # kill -0 2242906 00:16:39.325 08:17:11 -- common/autotest_common.sh@929 -- # uname 00:16:39.325 08:17:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:39.325 08:17:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2242906 00:16:39.325 08:17:11 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:16:39.325 08:17:11 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:16:39.325 08:17:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2242906' 00:16:39.325 killing process with pid 2242906 00:16:39.325 08:17:11 -- common/autotest_common.sh@943 -- # kill 2242906 00:16:39.325 08:17:11 -- common/autotest_common.sh@948 -- # wait 2242906 00:16:39.325 08:17:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:39.325 08:17:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:39.325 08:17:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:39.325 08:17:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:16:39.325 08:17:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:39.325 08:17:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.325 08:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.325 08:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.894 08:17:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:39.894 00:16:39.894 real 0m20.436s 00:16:39.894 user 0m24.702s 00:16:39.894 sys 0m5.909s 00:16:39.894 08:17:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:39.894 08:17:13 -- common/autotest_common.sh@10 -- # set +x 00:16:39.894 ************************************ 00:16:39.894 END TEST nvmf_queue_depth 00:16:39.894 ************************************ 00:16:39.894 08:17:13 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:39.894 08:17:13 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:39.894 08:17:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:39.894 08:17:13 -- common/autotest_common.sh@10 -- # set +x 00:16:39.894 ************************************ 00:16:39.894 START TEST nvmf_multipath 00:16:39.894 ************************************ 00:16:39.894 08:17:13 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:39.894 * Looking for test storage... 
00:16:40.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.154 08:17:13 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.154 08:17:13 -- nvmf/common.sh@7 -- # uname -s 00:16:40.154 08:17:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.154 08:17:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.154 08:17:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.154 08:17:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.154 08:17:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.154 08:17:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.154 08:17:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.154 08:17:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.154 08:17:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.154 08:17:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.154 08:17:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:40.154 08:17:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:40.154 08:17:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.154 08:17:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.154 08:17:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.154 08:17:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.154 08:17:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.154 08:17:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.154 08:17:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.154 08:17:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.154 08:17:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.154 08:17:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.154 08:17:13 -- paths/export.sh@5 -- # export PATH 00:16:40.154 08:17:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.154 08:17:13 -- nvmf/common.sh@46 -- # : 0 00:16:40.154 08:17:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:40.154 08:17:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:40.154 08:17:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:40.154 08:17:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.155 08:17:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.155 08:17:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:40.155 08:17:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:40.155 08:17:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:40.155 08:17:13 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.155 08:17:13 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.155 08:17:13 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:40.155 08:17:13 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.155 08:17:13 -- target/multipath.sh@43 -- # nvmftestinit 00:16:40.155 08:17:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:40.155 08:17:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.155 08:17:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:40.155 08:17:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:40.155 08:17:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:40.155 08:17:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:40.155 08:17:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.155 08:17:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.155 08:17:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:40.155 08:17:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:40.155 08:17:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:40.155 08:17:13 -- common/autotest_common.sh@10 -- # set +x 00:16:46.726 08:17:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:46.726 08:17:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:46.726 08:17:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:46.726 08:17:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:46.726 08:17:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:46.726 08:17:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:46.726 08:17:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:46.726 08:17:19 -- nvmf/common.sh@294 -- # net_devs=() 00:16:46.726 08:17:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:46.726 08:17:19 -- nvmf/common.sh@295 -- # e810=() 00:16:46.726 08:17:19 -- nvmf/common.sh@295 -- # local -ga e810 00:16:46.726 08:17:19 -- nvmf/common.sh@296 -- # x722=() 00:16:46.726 08:17:19 -- nvmf/common.sh@296 -- # local -ga x722 00:16:46.726 08:17:19 -- nvmf/common.sh@297 -- # mlx=() 00:16:46.726 08:17:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:46.726 08:17:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.726 08:17:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.726 08:17:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.726 08:17:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.726 08:17:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.727 08:17:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:16:46.727 08:17:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.727 08:17:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.727 08:17:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.727 08:17:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.727 08:17:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.727 08:17:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:46.727 08:17:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:46.727 08:17:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:46.727 08:17:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:46.727 08:17:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:46.727 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:46.727 08:17:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:46.727 08:17:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:46.727 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:46.727 08:17:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.727 08:17:19 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:46.727 08:17:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:46.727 08:17:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.727 08:17:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:46.727 08:17:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.727 08:17:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:46.727 Found net devices under 0000:af:00.0: cvl_0_0 00:16:46.727 08:17:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.727 08:17:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:46.727 08:17:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.727 08:17:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:46.727 08:17:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.727 08:17:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:46.727 Found net devices under 0000:af:00.1: cvl_0_1 00:16:46.727 08:17:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.727 08:17:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:46.727 08:17:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:46.727 08:17:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:46.727 08:17:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.727 08:17:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.727 08:17:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.727 08:17:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:16:46.727 08:17:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.727 08:17:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.727 08:17:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:46.727 08:17:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.727 08:17:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.727 08:17:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:46.727 08:17:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:46.727 08:17:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.727 08:17:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.727 08:17:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.727 08:17:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.727 08:17:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:46.727 08:17:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.727 08:17:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.727 08:17:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.727 08:17:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:46.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:16:46.727 00:16:46.727 --- 10.0.0.2 ping statistics --- 00:16:46.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.727 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:46.727 08:17:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:16:46.727 00:16:46.727 --- 10.0.0.1 ping statistics --- 00:16:46.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.727 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:16:46.727 08:17:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.727 08:17:19 -- nvmf/common.sh@410 -- # return 0 00:16:46.727 08:17:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:46.727 08:17:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.727 08:17:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.727 08:17:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:46.727 08:17:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:46.727 08:17:19 -- target/multipath.sh@45 -- # '[' -z ']' 00:16:46.727 08:17:19 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:46.727 only one NIC for nvmf test 00:16:46.727 08:17:19 -- target/multipath.sh@47 -- # nvmftestfini 00:16:46.727 08:17:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:46.727 08:17:19 -- nvmf/common.sh@116 -- # sync 00:16:46.727 08:17:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:46.727 08:17:19 -- nvmf/common.sh@119 -- # set +e 00:16:46.727 08:17:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:46.727 08:17:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:46.727 rmmod nvme_tcp 00:16:46.727 rmmod nvme_fabrics 00:16:46.727 rmmod nvme_keyring 00:16:46.727 08:17:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:46.727 08:17:19 -- nvmf/common.sh@123 -- # set -e 00:16:46.727 08:17:19 -- nvmf/common.sh@124 -- # return 0 00:16:46.727 08:17:19 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:46.727 08:17:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:46.727 08:17:19 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:46.727 08:17:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.727 08:17:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:46.727 08:17:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.727 08:17:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.727 08:17:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.634 08:17:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:48.634 08:17:21 -- target/multipath.sh@48 -- # exit 0 00:16:48.634 08:17:21 -- target/multipath.sh@1 -- # nvmftestfini 00:16:48.634 08:17:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:48.634 08:17:21 -- nvmf/common.sh@116 -- # sync 00:16:48.634 08:17:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:48.634 08:17:21 -- nvmf/common.sh@119 -- # set +e 00:16:48.634 08:17:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:48.634 08:17:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:48.634 08:17:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:48.634 08:17:21 -- nvmf/common.sh@123 -- # set -e 00:16:48.634 08:17:21 -- nvmf/common.sh@124 -- # return 0 00:16:48.634 08:17:21 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:48.634 08:17:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:48.634 08:17:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:48.634 08:17:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:48.634 08:17:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.634 08:17:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:48.634 08:17:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.634 08:17:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.634 08:17:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.634 08:17:21 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:48.634 00:16:48.634 real 0m8.386s 00:16:48.634 user 0m1.783s 00:16:48.634 sys 0m4.611s 00:16:48.634 08:17:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:48.634 08:17:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.634 ************************************ 00:16:48.634 END TEST nvmf_multipath 00:16:48.634 ************************************ 00:16:48.634 08:17:21 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:48.634 08:17:21 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:48.634 08:17:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:48.634 08:17:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.634 ************************************ 00:16:48.634 START TEST nvmf_zcopy 00:16:48.634 ************************************ 00:16:48.634 08:17:21 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:48.634 * Looking for test storage... 
00:16:48.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.634 08:17:22 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.634 08:17:22 -- nvmf/common.sh@7 -- # uname -s 00:16:48.634 08:17:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.634 08:17:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.634 08:17:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.634 08:17:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.634 08:17:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.634 08:17:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.634 08:17:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.634 08:17:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.634 08:17:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.634 08:17:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.634 08:17:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:48.634 08:17:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:48.634 08:17:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.634 08:17:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.634 08:17:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.634 08:17:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.634 08:17:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.634 08:17:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.634 08:17:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.634 08:17:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.634 08:17:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.634 08:17:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.634 08:17:22 -- paths/export.sh@5 -- # export PATH 00:16:48.634 08:17:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.635 08:17:22 -- nvmf/common.sh@46 -- # : 0 00:16:48.635 08:17:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:48.635 08:17:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:48.635 08:17:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:48.635 08:17:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.635 08:17:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.635 08:17:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:48.635 08:17:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:48.635 08:17:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:48.635 08:17:22 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:48.635 08:17:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:48.635 08:17:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.635 08:17:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:48.635 08:17:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:48.635 08:17:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:48.635 08:17:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.635 08:17:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.635 08:17:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.635 08:17:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:48.635 08:17:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:48.635 08:17:22 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:16:48.635 08:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:55.211 08:17:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:55.211 08:17:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:55.211 08:17:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:55.211 08:17:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:55.211 08:17:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:55.211 08:17:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:55.211 08:17:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:55.211 08:17:27 -- nvmf/common.sh@294 -- # net_devs=() 00:16:55.211 08:17:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:55.211 08:17:27 -- nvmf/common.sh@295 -- # e810=() 00:16:55.211 08:17:27 -- nvmf/common.sh@295 -- # local -ga e810 00:16:55.211 08:17:27 -- nvmf/common.sh@296 -- # x722=() 00:16:55.211 08:17:27 -- nvmf/common.sh@296 -- # local -ga x722 00:16:55.211 08:17:27 -- nvmf/common.sh@297 -- # mlx=() 00:16:55.211 08:17:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:55.211 08:17:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.211 08:17:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:55.211 08:17:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:55.211 08:17:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:55.211 08:17:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:55.211 08:17:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:55.211 08:17:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:55.211 08:17:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:55.211 08:17:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:55.211 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:55.211 08:17:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:55.211 08:17:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:55.211 08:17:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:55.212 08:17:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:55.212 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:55.212 08:17:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:55.212 08:17:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:16:55.212 08:17:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.212 08:17:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:55.212 08:17:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.212 08:17:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:55.212 Found net devices under 0000:af:00.0: cvl_0_0 00:16:55.212 08:17:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.212 08:17:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:55.212 08:17:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.212 08:17:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:55.212 08:17:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.212 08:17:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:55.212 Found net devices under 0000:af:00.1: cvl_0_1 00:16:55.212 08:17:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.212 08:17:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:55.212 08:17:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:55.212 08:17:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:55.212 08:17:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.212 08:17:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.212 08:17:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.212 08:17:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:55.212 08:17:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.212 08:17:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.212 08:17:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:55.212 08:17:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:16:55.212 08:17:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.212 08:17:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:55.212 08:17:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:55.212 08:17:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.212 08:17:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.212 08:17:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.212 08:17:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.212 08:17:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:55.212 08:17:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.212 08:17:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.212 08:17:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.212 08:17:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:55.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:16:55.212 00:16:55.212 --- 10.0.0.2 ping statistics --- 00:16:55.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.212 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:16:55.212 08:17:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:16:55.212 00:16:55.212 --- 10.0.0.1 ping statistics --- 00:16:55.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.212 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:16:55.212 08:17:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.212 08:17:27 -- nvmf/common.sh@410 -- # return 0 00:16:55.212 08:17:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:55.212 08:17:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.212 08:17:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:55.212 08:17:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.212 08:17:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:55.212 08:17:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:55.212 08:17:28 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:55.212 08:17:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:55.212 08:17:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:55.212 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.212 08:17:28 -- nvmf/common.sh@469 -- # nvmfpid=2252563 00:16:55.212 08:17:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:55.212 08:17:28 -- nvmf/common.sh@470 -- # waitforlisten 2252563 00:16:55.212 08:17:28 -- common/autotest_common.sh@817 -- # '[' -z 2252563 ']' 00:16:55.212 08:17:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.212 08:17:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:55.212 08:17:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:55.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.212 08:17:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:55.212 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.212 [2024-02-13 08:17:28.052775] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:55.212 [2024-02-13 08:17:28.052818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.212 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.212 [2024-02-13 08:17:28.113060] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.212 [2024-02-13 08:17:28.187929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:55.212 [2024-02-13 08:17:28.188032] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.212 [2024-02-13 08:17:28.188040] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.212 [2024-02-13 08:17:28.188046] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.212 [2024-02-13 08:17:28.188062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.212 08:17:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:55.212 08:17:28 -- common/autotest_common.sh@850 -- # return 0 00:16:55.212 08:17:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:55.212 08:17:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:55.212 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.212 08:17:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.212 08:17:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:55.212 08:17:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:55.212 08:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.212 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.212 [2024-02-13 08:17:28.881291] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.212 08:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.212 08:17:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:55.212 08:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.212 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.212 08:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.212 08:17:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.212 08:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.212 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.212 [2024-02-13 08:17:28.897404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.472 08:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.472 08:17:28 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:55.472 08:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.472 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.472 08:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.472 08:17:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:55.472 08:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.472 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.472 malloc0 00:16:55.472 08:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.472 08:17:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:55.472 08:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.472 08:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.472 08:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.472 08:17:28 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:55.472 08:17:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:55.472 08:17:28 -- nvmf/common.sh@520 -- # config=() 00:16:55.472 08:17:28 -- nvmf/common.sh@520 -- # local subsystem config 00:16:55.472 08:17:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:55.472 08:17:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:55.472 { 00:16:55.472 "params": { 00:16:55.472 "name": "Nvme$subsystem", 00:16:55.472 "trtype": "$TEST_TRANSPORT", 00:16:55.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.472 "adrfam": "ipv4", 00:16:55.472 "trsvcid": "$NVMF_PORT", 00:16:55.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.472 "hdgst": ${hdgst:-false}, 00:16:55.472 "ddgst": ${ddgst:-false} 00:16:55.472 }, 00:16:55.472 "method": "bdev_nvme_attach_controller" 00:16:55.472 } 00:16:55.472 
EOF 00:16:55.472 )") 00:16:55.472 08:17:28 -- nvmf/common.sh@542 -- # cat 00:16:55.472 08:17:28 -- nvmf/common.sh@544 -- # jq . 00:16:55.472 08:17:28 -- nvmf/common.sh@545 -- # IFS=, 00:16:55.472 08:17:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:55.472 "params": { 00:16:55.472 "name": "Nvme1", 00:16:55.472 "trtype": "tcp", 00:16:55.472 "traddr": "10.0.0.2", 00:16:55.472 "adrfam": "ipv4", 00:16:55.472 "trsvcid": "4420", 00:16:55.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.472 "hdgst": false, 00:16:55.472 "ddgst": false 00:16:55.472 }, 00:16:55.472 "method": "bdev_nvme_attach_controller" 00:16:55.472 }' 00:16:55.472 [2024-02-13 08:17:28.970308] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:55.472 [2024-02-13 08:17:28.970354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252811 ] 00:16:55.472 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.472 [2024-02-13 08:17:29.030072] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.472 [2024-02-13 08:17:29.099443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.472 [2024-02-13 08:17:29.099496] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:16:55.732 Running I/O for 10 seconds... 
00:17:05.747 00:17:05.747 Latency(us) 00:17:05.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.747 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:05.747 Verification LBA range: start 0x0 length 0x1000 00:17:05.747 Nvme1n1 : 10.01 13322.33 104.08 0.00 0.00 9585.47 928.43 25839.91 00:17:05.747 =================================================================================================================== 00:17:05.747 Total : 13322.33 104.08 0.00 0.00 9585.47 928.43 25839.91 00:17:05.747 [2024-02-13 08:17:39.411347] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:17:06.006 08:17:39 -- target/zcopy.sh@39 -- # perfpid=2254574 00:17:06.007 08:17:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:17:06.007 08:17:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.007 08:17:39 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:06.007 08:17:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:06.007 08:17:39 -- nvmf/common.sh@520 -- # config=() 00:17:06.007 08:17:39 -- nvmf/common.sh@520 -- # local subsystem config 00:17:06.007 08:17:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:06.007 08:17:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:06.007 { 00:17:06.007 "params": { 00:17:06.007 "name": "Nvme$subsystem", 00:17:06.007 "trtype": "$TEST_TRANSPORT", 00:17:06.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.007 "adrfam": "ipv4", 00:17:06.007 "trsvcid": "$NVMF_PORT", 00:17:06.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.007 "hdgst": ${hdgst:-false}, 00:17:06.007 "ddgst": ${ddgst:-false} 00:17:06.007 }, 00:17:06.007 "method": 
"bdev_nvme_attach_controller" 00:17:06.007 } 00:17:06.007 EOF 00:17:06.007 )") 00:17:06.007 [2024-02-13 08:17:39.620437] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.620468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 08:17:39 -- nvmf/common.sh@542 -- # cat 00:17:06.007 08:17:39 -- nvmf/common.sh@544 -- # jq . 00:17:06.007 [2024-02-13 08:17:39.628426] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.628438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 08:17:39 -- nvmf/common.sh@545 -- # IFS=, 00:17:06.007 08:17:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:06.007 "params": { 00:17:06.007 "name": "Nvme1", 00:17:06.007 "trtype": "tcp", 00:17:06.007 "traddr": "10.0.0.2", 00:17:06.007 "adrfam": "ipv4", 00:17:06.007 "trsvcid": "4420", 00:17:06.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.007 "hdgst": false, 00:17:06.007 "ddgst": false 00:17:06.007 }, 00:17:06.007 "method": "bdev_nvme_attach_controller" 00:17:06.007 }' 00:17:06.007 [2024-02-13 08:17:39.636444] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.636454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 [2024-02-13 08:17:39.644464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.644474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 [2024-02-13 08:17:39.652485] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.652495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 [2024-02-13 08:17:39.658065] 
Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:17:06.007 [2024-02-13 08:17:39.658110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254574 ] 00:17:06.007 [2024-02-13 08:17:39.660505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.660517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 [2024-02-13 08:17:39.668526] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.668536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 [2024-02-13 08:17:39.676557] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.676567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 [2024-02-13 08:17:39.684568] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.684578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.007 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.007 [2024-02-13 08:17:39.692591] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.007 [2024-02-13 08:17:39.692602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.700612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.700621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.708633] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:17:06.267 [2024-02-13 08:17:39.708643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.716659] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.716669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.719753] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.267 [2024-02-13 08:17:39.724682] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.724695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.732700] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.732712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.740721] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.740731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.748742] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.748751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.756767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.756783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.764788] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.764799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.772807] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.772816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.780830] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.780841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.788851] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.788862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.789047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.267 [2024-02-13 08:17:39.789089] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:17:06.267 [2024-02-13 08:17:39.796869] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.796880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.804901] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.804919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.812915] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.812926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.820935] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.820945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:17:06.267 [2024-02-13 08:17:39.828957] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.828967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.836977] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.836986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.845000] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.845009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.853021] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.853031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.861043] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.861052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.869062] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.869071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.877114] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.877132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.885112] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.885125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.893134] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.893147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.901155] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.901168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.909174] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.909187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.917194] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.917203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.925215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.925224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.933241] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.933252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.941262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.941271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.267 [2024-02-13 08:17:39.949285] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.267 [2024-02-13 08:17:39.949294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:39.957308] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:39.957322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:39.965326] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:39.965336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:39.973349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:39.973358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:39.981370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:39.981379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:39.989396] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:39.989409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:39.997419] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:39.997432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.005442] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.005454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.013460] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.013470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.021480] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 
[2024-02-13 08:17:40.021490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.029502] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.029512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.037525] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.037537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.045542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.045552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.054664] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.054680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.061590] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.061601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 Running I/O for 5 seconds... 
00:17:06.527 [2024-02-13 08:17:40.069611] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.069620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.085853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.085872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.095974] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.095993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.104447] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.104465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.527 [2024-02-13 08:17:40.112627] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.527 [2024-02-13 08:17:40.112645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.121029] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.121046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.131727] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.131744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.142707] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.142725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.150915] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.150932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.159781] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.159799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.168694] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.168711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.176877] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.176895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.185095] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.185114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.193640] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.193664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.200357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.200375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.528 [2024-02-13 08:17:40.210696] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.528 [2024-02-13 08:17:40.210714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.219063] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.219081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.227146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.227164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.236297] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.236315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.244586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.244606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.253847] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.253865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.262891] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.262909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.272282] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.272301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.280966] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.280984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.289252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 
[2024-02-13 08:17:40.289271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.296794] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.296811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.306736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.306754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.315362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.315380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.324258] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.324276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.332991] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.333008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.341113] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.341131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.350545] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.350564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.358913] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.358931] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.367968] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.367991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.376853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.376872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.386213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.386231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.395141] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.395159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.403804] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.403824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.413009] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.413028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.421738] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.421757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.430842] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.430860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:06.788 [2024-02-13 08:17:40.439154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.439173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.447834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.447852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.457160] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.457177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.465471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.465488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.788 [2024-02-13 08:17:40.473499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:06.788 [2024-02-13 08:17:40.473517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.482478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.482496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.491029] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.491047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.500064] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.500082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.508138] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.508156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.517191] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.517209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.525555] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.525573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.533601] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.533623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.542866] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.542886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.551379] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.551396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.559861] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.559879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.568784] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.568802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.576707] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.576725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.585696] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.585713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.594185] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.594203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.602808] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.602825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.610921] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.610939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.620041] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.620059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.628923] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.628940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.638102] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.638120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.645927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 
[2024-02-13 08:17:40.645944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.654548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.654565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.663655] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.663673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.672699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.672716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.681891] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.681908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.690192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.690210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.698651] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.049 [2024-02-13 08:17:40.698673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.049 [2024-02-13 08:17:40.707084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.050 [2024-02-13 08:17:40.707102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.050 [2024-02-13 08:17:40.715639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.050 [2024-02-13 08:17:40.715663] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.050 [2024-02-13 08:17:40.724559] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.050 [2024-02-13 08:17:40.724577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.050 [2024-02-13 08:17:40.733369] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.050 [2024-02-13 08:17:40.733387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.742084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.742102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.750353] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.750374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.759641] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.759665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.768623] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.768641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.777392] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.777410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.786064] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.786083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:07.310 [2024-02-13 08:17:40.794949] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.794968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.803767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.803785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.812539] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.812557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.821673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.821692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.830320] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.830339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.838423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.838440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.847565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.847583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.856570] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.856589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.865300] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.865323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.874119] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.874138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.882813] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.882832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.891444] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.891463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.900327] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.900344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.909041] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.909058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.917666] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.917684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.926344] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.926362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.935181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.935200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.943919] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.943938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.952635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.952660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.961476] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.961496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.970304] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.970322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.976919] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.976937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.987548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.987566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.310 [2024-02-13 08:17:40.996584] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.310 [2024-02-13 08:17:40.996603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.570 [2024-02-13 08:17:41.005285] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.570 
[2024-02-13 08:17:41.005303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.570 [2024-02-13 08:17:41.013983] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.570 [2024-02-13 08:17:41.014002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.570 [2024-02-13 08:17:41.023327] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.570 [2024-02-13 08:17:41.023345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.570 [2024-02-13 08:17:41.031815] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.570 [2024-02-13 08:17:41.031832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.570 [2024-02-13 08:17:41.040323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.570 [2024-02-13 08:17:41.040341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.049038] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.049057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.057030] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.057048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.066020] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.066039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.074496] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.074514] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.083180] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.083199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.091842] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.091860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.100836] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.100855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.109326] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.109344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.118119] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.118137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.126980] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.126998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.136100] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.136119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.144409] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.144427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:07.571 [2024-02-13 08:17:41.152529] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.152546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.160583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.160600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.169262] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.169279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.177826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.177843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.185815] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.185833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.194424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.194443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.202734] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.202751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.211132] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.211150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.217709] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.217727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.228359] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.228377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.236632] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.236655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.245341] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.245359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.571 [2024-02-13 08:17:41.253877] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.571 [2024-02-13 08:17:41.253895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.262716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.831 [2024-02-13 08:17:41.262735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.271177] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.831 [2024-02-13 08:17:41.271194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.279357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.831 [2024-02-13 08:17:41.279375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.288252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:07.831 [2024-02-13 08:17:41.288269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.297152] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.831 [2024-02-13 08:17:41.297170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.305888] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.831 [2024-02-13 08:17:41.305905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.314247] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.831 [2024-02-13 08:17:41.314265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.831 [2024-02-13 08:17:41.323694] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.323712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.332407] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.332425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.341033] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.341051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.350079] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.350097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.359162] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 
[2024-02-13 08:17:41.359180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.365636] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.365662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.376122] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.376140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.384679] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.384697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.393376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.393394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.401870] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.401887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.410200] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.410218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.418282] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.418300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.427395] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.427413] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.435606] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.435625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.443685] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.443708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.452086] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.452105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.460526] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.460543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.469039] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.469057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.477164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.477182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.485960] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.485978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.494766] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.494784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:07.832 [2024-02-13 08:17:41.503260] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.503277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:07.832 [2024-02-13 08:17:41.511759] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:07.832 [2024-02-13 08:17:41.511783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.520846] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.520864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.530715] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.530732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.538856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.538874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.546153] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.546171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.555867] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.555885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.563993] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.564010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.571982] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.571999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.579822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.579839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.589157] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.589175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.597630] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.597654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.606472] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.606490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.615228] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.615245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.623994] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.624011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.632706] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.632724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.640255] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.640273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.649663] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.649681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.658542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.658560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.667406] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.667424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.675480] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.675501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.683367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.683385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.692005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.692022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.700837] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.700855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.709758] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 
[2024-02-13 08:17:41.709776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.717902] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.717920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.726075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.726092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.734096] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.734114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.743117] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.743134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.751738] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.751756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.760283] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.760301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.768892] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.768909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.092 [2024-02-13 08:17:41.777378] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.092 [2024-02-13 08:17:41.777395] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.784616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.784633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.794839] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.794857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.803461] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.803478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.811375] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.811393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.819976] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.819994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.828180] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.828198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.836884] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.836906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.845117] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.845135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:08.352 [2024-02-13 08:17:41.852990] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.853008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.862666] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.862684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.870811] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.870829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.879013] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.879030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.887153] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.887171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.895119] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.895136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.902191] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.902208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.912958] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.352 [2024-02-13 08:17:41.912976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.352 [2024-02-13 08:17:41.921627] 
[... the same error pair — subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously from 08:17:41.92 through 08:17:43.24 with only the timestamps changing ...]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.251762] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.251780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.262322] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.262340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.270614] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.270635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.278916] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.278935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.287616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.287634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.296419] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.296437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.305160] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.305178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.313539] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.313557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:09.654 [2024-02-13 08:17:43.322574] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.322592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.331477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.331495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.654 [2024-02-13 08:17:43.340057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.654 [2024-02-13 08:17:43.340075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.913 [2024-02-13 08:17:43.348703] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.913 [2024-02-13 08:17:43.348720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.913 [2024-02-13 08:17:43.357024] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.913 [2024-02-13 08:17:43.357045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.913 [2024-02-13 08:17:43.364737] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.913 [2024-02-13 08:17:43.364754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.913 [2024-02-13 08:17:43.379621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.913 [2024-02-13 08:17:43.379640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.390207] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.390225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.398266] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.398284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.407551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.407569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.416364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.416382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.430338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.430357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.437512] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.437529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.447081] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.447099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.453555] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.453573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.464077] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.464096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.478255] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.478274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.486734] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.486752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.495013] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.495032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.502623] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.502642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.511555] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.511574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.525378] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.525397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.534084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.534104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.542466] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.542492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.551090] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 
[2024-02-13 08:17:43.551109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.560110] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.560129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.569054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.569073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.578009] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.578028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.586800] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.586818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.914 [2024-02-13 08:17:43.596142] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.914 [2024-02-13 08:17:43.596161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.604397] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.173 [2024-02-13 08:17:43.604416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.625975] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.173 [2024-02-13 08:17:43.625995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.633993] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.173 [2024-02-13 08:17:43.634010] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.642777] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.173 [2024-02-13 08:17:43.642795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.651039] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.173 [2024-02-13 08:17:43.651057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.664545] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.173 [2024-02-13 08:17:43.664563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.671673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.173 [2024-02-13 08:17:43.671691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.173 [2024-02-13 08:17:43.681174] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.681192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.689723] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.689741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.698889] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.698907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.707775] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.707794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:10.174 [2024-02-13 08:17:43.716893] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.716911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.725963] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.725981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.734874] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.734892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.744068] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.744087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.757419] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.757439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.765618] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.765636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.774292] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.774311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.783385] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.783403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.792077] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.792096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.805978] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.805997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.814499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.814519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.822859] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.822878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.831736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.831755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.839955] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.839973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.849147] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.849165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.174 [2024-02-13 08:17:43.857302] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.174 [2024-02-13 08:17:43.857320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.866166] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.866184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.875148] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.875166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.884603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.884621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.898850] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.898869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.907060] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.907079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.915260] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.915278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.924530] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.924549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.933002] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.933020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.942228] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 
[2024-02-13 08:17:43.942246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.950560] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.950578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.959468] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.959486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.433 [2024-02-13 08:17:43.968413] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.433 [2024-02-13 08:17:43.968431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:43.976598] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:43.976615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:43.985424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:43.985441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:43.994815] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:43.994833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.004028] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.004046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.012978] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.012996] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.021556] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.021574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.030265] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.030283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.039101] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.039120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.047564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.047581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.056246] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.056263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.065113] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.065131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.074316] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.074334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.083007] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.083024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:10.434 [2024-02-13 08:17:44.091857] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.091876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.100652] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.100670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.110084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.110103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.434 [2024-02-13 08:17:44.118767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.434 [2024-02-13 08:17:44.118784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.127859] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.127878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.136768] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.136786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.145711] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.145729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.154833] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.154852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.168395] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.168413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.176837] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.176855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.185226] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.185243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.193872] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.193891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.202921] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.202939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.212163] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.212182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.220779] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.220797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.228943] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.228961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.237794] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.237812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.246478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.246497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.260731] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.260750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.268054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.268072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.277606] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.277624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.286107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.286125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.295302] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.295320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.309275] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.309293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.317544] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 
[2024-02-13 08:17:44.317562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.326320] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.326338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.335608] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.335626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.693 [2024-02-13 08:17:44.344856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.693 [2024-02-13 08:17:44.344874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.694 [2024-02-13 08:17:44.353928] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.694 [2024-02-13 08:17:44.353945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.694 [2024-02-13 08:17:44.362300] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.694 [2024-02-13 08:17:44.362318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.694 [2024-02-13 08:17:44.371179] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.694 [2024-02-13 08:17:44.371197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.694 [2024-02-13 08:17:44.379999] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.694 [2024-02-13 08:17:44.380017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.389275] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.389293] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.402834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.402852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.411405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.411423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.420219] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.420241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.428596] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.428614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.437536] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.437554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.451225] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.451243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.459877] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.459895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.466596] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.466613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:10.953 [2024-02-13 08:17:44.476360] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.476378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.485098] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.485117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.493808] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.493826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.502693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.502711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.510627] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.510652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.519521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.519539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.528221] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.528239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.537348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.537366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.545602] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.545621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.554323] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.554340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.562892] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.562911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.572005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.572023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.586079] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.586099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.594609] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.594632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.602948] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.602966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.611803] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.611821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.619988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.620005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:10.953 [2024-02-13 08:17:44.633805] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:10.953 [2024-02-13 08:17:44.633824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.642446] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.642464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.650964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.650983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.659813] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.659832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.667949] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.667967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.676896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.676914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.685605] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.685623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.694155] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 
[2024-02-13 08:17:44.694173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.703383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.703402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.712559] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.712577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.726084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.726102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.732847] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.732865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.743470] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.743487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.751681] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.751698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.760113] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.760131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.773946] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.773968] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.784061] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.784078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.796677] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.796694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.807769] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.807787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.816316] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.816334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.831914] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.831932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.841154] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.841173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.849673] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.849691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.858619] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.858636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:11.212 [2024-02-13 08:17:44.867208] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.867225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.880955] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.880974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.887618] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.887635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.212 [2024-02-13 08:17:44.897370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.212 [2024-02-13 08:17:44.897388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.906178] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.906196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.914855] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.914872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.927084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.927103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.935848] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.935866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.944734] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.944753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.953562] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.953581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.962496] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.962519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.976294] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.976314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.984764] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.984782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:44.993313] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:44.993333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.001458] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.001477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.010015] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.010034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.023549] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.023568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.032106] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.032125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.040440] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.040458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.049312] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.049330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.057958] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.057976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.066990] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.067008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.076231] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.076249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 [2024-02-13 08:17:45.082524] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.082541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.472 00:17:11.472 Latency(us) 00:17:11.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.472 Job: 
Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:11.472 Nvme1n1 : 5.01 17424.59 136.13 0.00 0.00 7340.57 2075.31 24966.10 00:17:11.472 =================================================================================================================== 00:17:11.472 Total : 17424.59 136.13 0.00 0.00 7340.57 2075.31 24966.10 00:17:11.472 [2024-02-13 08:17:45.084156] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:17:11.472 [2024-02-13 08:17:45.090539] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:11.472 [2024-02-13 08:17:45.090555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:11.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2254574) - No such process 00:17:11.732 08:17:45 -- target/zcopy.sh@49 -- # wait 2254574 00:17:11.732 08:17:45 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.732 08:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.732 08:17:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.732 08:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.732 08:17:45 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:11.732 08:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.732 08:17:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.732 delay0 00:17:11.732 08:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.732 08:17:45 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:11.732 08:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.732 08:17:45 -- common/autotest_common.sh@10 -- # set +x 00:17:11.732 08:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.732 08:17:45 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:11.732 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.991 [2024-02-13 08:17:45.466828] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:18.562 Initializing NVMe Controllers 00:17:18.562 Attached to NVMe over Fabrics controller at
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:18.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:18.562 Initialization complete. Launching workers. 00:17:18.562 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 65 00:17:18.562 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 351, failed to submit 34 00:17:18.562 success 131, unsuccess 220, failed 0 00:17:18.562 08:17:51 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:18.562 08:17:51 -- target/zcopy.sh@60 -- # nvmftestfini 00:17:18.562 08:17:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:18.562 08:17:51 -- nvmf/common.sh@116 -- # sync 00:17:18.562 08:17:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:18.562 08:17:51 -- nvmf/common.sh@119 -- # set +e 00:17:18.562 08:17:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:18.562 08:17:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:18.562 rmmod nvme_tcp 00:17:18.562 rmmod nvme_fabrics 00:17:18.562 rmmod nvme_keyring 00:17:18.562 08:17:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:18.562 08:17:51 -- nvmf/common.sh@123 -- # set -e 00:17:18.562 08:17:51 -- nvmf/common.sh@124 -- # return 0 00:17:18.562 08:17:51 -- nvmf/common.sh@477 -- # '[' -n 2252563 ']' 00:17:18.562 08:17:51 -- nvmf/common.sh@478 -- # killprocess 2252563 00:17:18.562 08:17:51 -- common/autotest_common.sh@924 -- # '[' -z 2252563 ']' 00:17:18.562 08:17:51 -- common/autotest_common.sh@928 -- # kill -0 2252563 00:17:18.562 08:17:51 -- common/autotest_common.sh@929 -- # uname 00:17:18.562 08:17:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:18.562 08:17:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2252563 00:17:18.562 08:17:51 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:17:18.562 08:17:51 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:17:18.563 08:17:51 -- 
common/autotest_common.sh@942 -- # echo 'killing process with pid 2252563' 00:17:18.563 killing process with pid 2252563 00:17:18.563 08:17:51 -- common/autotest_common.sh@943 -- # kill 2252563 00:17:18.563 08:17:51 -- common/autotest_common.sh@948 -- # wait 2252563 00:17:18.563 08:17:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:18.563 08:17:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:18.563 08:17:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:18.563 08:17:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.563 08:17:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:18.563 08:17:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.563 08:17:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.563 08:17:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.470 08:17:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:20.470 00:17:20.470 real 0m32.151s 00:17:20.470 user 0m43.253s 00:17:20.470 sys 0m11.158s 00:17:20.470 08:17:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:20.470 08:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.470 ************************************ 00:17:20.470 END TEST nvmf_zcopy 00:17:20.470 ************************************ 00:17:20.470 08:17:54 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:20.470 08:17:54 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:17:20.470 08:17:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:20.470 08:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:20.470 ************************************ 00:17:20.470 START TEST nvmf_nmic 00:17:20.470 ************************************ 00:17:20.470 08:17:54 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 
00:17:20.729 * Looking for test storage... 00:17:20.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.729 08:17:54 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.729 08:17:54 -- nvmf/common.sh@7 -- # uname -s 00:17:20.729 08:17:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.729 08:17:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.729 08:17:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.729 08:17:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.729 08:17:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.729 08:17:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.729 08:17:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.729 08:17:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.729 08:17:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.729 08:17:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.729 08:17:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:20.729 08:17:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:20.729 08:17:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.729 08:17:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.729 08:17:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.729 08:17:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.729 08:17:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.729 08:17:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.729 08:17:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.729 08:17:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.729 08:17:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.729 08:17:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.729 08:17:54 -- paths/export.sh@5 -- # export PATH 00:17:20.729 08:17:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.729 08:17:54 -- nvmf/common.sh@46 -- # : 0 00:17:20.729 08:17:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:20.729 08:17:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:20.729 08:17:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:20.729 08:17:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.729 08:17:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.729 08:17:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:20.729 08:17:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:20.729 08:17:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:20.729 08:17:54 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.729 08:17:54 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.729 08:17:54 -- target/nmic.sh@14 -- # nvmftestinit 00:17:20.729 08:17:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:20.729 08:17:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.729 08:17:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:20.729 08:17:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:20.729 08:17:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:20.729 08:17:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.729 08:17:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.729 08:17:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.730 08:17:54 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:17:20.730 08:17:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:20.730 08:17:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:20.730 08:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:27.299 08:17:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:27.299 08:17:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:27.299 08:17:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:27.299 08:17:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:27.299 08:17:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:27.299 08:17:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:27.299 08:17:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:27.299 08:17:59 -- nvmf/common.sh@294 -- # net_devs=() 00:17:27.299 08:17:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:27.299 08:17:59 -- nvmf/common.sh@295 -- # e810=() 00:17:27.299 08:17:59 -- nvmf/common.sh@295 -- # local -ga e810 00:17:27.299 08:17:59 -- nvmf/common.sh@296 -- # x722=() 00:17:27.299 08:17:59 -- nvmf/common.sh@296 -- # local -ga x722 00:17:27.299 08:17:59 -- nvmf/common.sh@297 -- # mlx=() 00:17:27.299 08:17:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:27.299 08:17:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.299 08:17:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:27.299 08:17:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:27.299 08:17:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:27.299 08:17:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:27.299 08:17:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:27.299 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:27.299 08:17:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:27.299 08:17:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:27.299 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:27.299 08:17:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:27.299 08:17:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:27.299 08:17:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.299 08:17:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:27.299 08:17:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.299 08:17:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:27.299 Found net devices under 0000:af:00.0: cvl_0_0 00:17:27.299 08:17:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.299 08:17:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:27.299 08:17:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.299 08:17:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:27.299 08:17:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.299 08:17:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:27.299 Found net devices under 0000:af:00.1: cvl_0_1 00:17:27.299 08:17:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.299 08:17:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:27.299 08:17:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:27.299 08:17:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:27.299 08:17:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:27.299 08:17:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.299 08:17:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.299 08:17:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.299 08:17:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:27.299 08:17:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.299 08:17:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.299 08:17:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:17:27.299 08:17:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.299 08:17:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.299 08:17:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:27.299 08:17:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:27.299 08:18:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.299 08:18:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.299 08:18:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.299 08:18:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.300 08:18:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:27.300 08:18:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.300 08:18:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.300 08:18:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.300 08:18:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:27.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:17:27.300 00:17:27.300 --- 10.0.0.2 ping statistics --- 00:17:27.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.300 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:17:27.300 08:18:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:17:27.300 00:17:27.300 --- 10.0.0.1 ping statistics --- 00:17:27.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.300 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:17:27.300 08:18:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.300 08:18:00 -- nvmf/common.sh@410 -- # return 0 00:17:27.300 08:18:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:27.300 08:18:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.300 08:18:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:27.300 08:18:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:27.300 08:18:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.300 08:18:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:27.300 08:18:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:27.300 08:18:00 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:27.300 08:18:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:27.300 08:18:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:27.300 08:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:27.300 08:18:00 -- nvmf/common.sh@469 -- # nvmfpid=2260332 00:17:27.300 08:18:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.300 08:18:00 -- nvmf/common.sh@470 -- # waitforlisten 2260332 00:17:27.300 08:18:00 -- common/autotest_common.sh@817 -- # '[' -z 2260332 ']' 00:17:27.300 08:18:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.300 08:18:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.300 08:18:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:27.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.300 08:18:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.300 08:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:27.300 [2024-02-13 08:18:00.329535] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:17:27.300 [2024-02-13 08:18:00.329579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.300 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.300 [2024-02-13 08:18:00.395054] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.300 [2024-02-13 08:18:00.469432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:27.300 [2024-02-13 08:18:00.469544] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.300 [2024-02-13 08:18:00.469552] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.300 [2024-02-13 08:18:00.469559] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:27.300 [2024-02-13 08:18:00.469616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.300 [2024-02-13 08:18:00.469719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.300 [2024-02-13 08:18:00.469743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.300 [2024-02-13 08:18:00.469744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.559 08:18:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.559 08:18:01 -- common/autotest_common.sh@850 -- # return 0 00:17:27.559 08:18:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:27.559 08:18:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:27.559 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.559 08:18:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.559 08:18:01 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.559 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.559 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.559 [2024-02-13 08:18:01.178928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.559 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.559 08:18:01 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:27.559 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.559 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.559 Malloc0 00:17:27.560 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.560 08:18:01 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:27.560 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.560 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.560 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:17:27.560 08:18:01 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:27.560 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.560 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.560 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.560 08:18:01 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.560 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.560 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.560 [2024-02-13 08:18:01.230292] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.560 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.560 08:18:01 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:27.560 test case1: single bdev can't be used in multiple subsystems 00:17:27.560 08:18:01 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:27.560 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.560 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.560 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.560 08:18:01 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:27.560 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.560 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.819 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.819 08:18:01 -- target/nmic.sh@28 -- # nmic_status=0 00:17:27.819 08:18:01 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:27.819 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.819 08:18:01 -- common/autotest_common.sh@10 
-- # set +x 00:17:27.819 [2024-02-13 08:18:01.262242] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:27.819 [2024-02-13 08:18:01.262261] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:27.819 [2024-02-13 08:18:01.262269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.819 request: 00:17:27.819 { 00:17:27.819 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:27.819 "namespace": { 00:17:27.819 "bdev_name": "Malloc0" 00:17:27.819 }, 00:17:27.819 "method": "nvmf_subsystem_add_ns", 00:17:27.819 "req_id": 1 00:17:27.819 } 00:17:27.819 Got JSON-RPC error response 00:17:27.819 response: 00:17:27.819 { 00:17:27.819 "code": -32602, 00:17:27.819 "message": "Invalid parameters" 00:17:27.819 } 00:17:27.819 08:18:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:27.819 08:18:01 -- target/nmic.sh@29 -- # nmic_status=1 00:17:27.819 08:18:01 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:27.819 08:18:01 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:27.819 Adding namespace failed - expected result. 
00:17:27.819 08:18:01 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:27.819 test case2: host connect to nvmf target in multiple paths 00:17:27.819 08:18:01 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:27.819 08:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.819 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:27.819 [2024-02-13 08:18:01.274356] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:27.819 08:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.819 08:18:01 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.756 08:18:02 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:30.198 08:18:03 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:30.198 08:18:03 -- common/autotest_common.sh@1175 -- # local i=0 00:17:30.198 08:18:03 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.198 08:18:03 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:17:30.198 08:18:03 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:32.109 08:18:05 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:32.109 08:18:05 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:32.109 08:18:05 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:17:32.109 08:18:05 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:17:32.109 08:18:05 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 
00:17:32.109 08:18:05 -- common/autotest_common.sh@1185 -- # return 0 00:17:32.109 08:18:05 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:32.109 [global] 00:17:32.109 thread=1 00:17:32.109 invalidate=1 00:17:32.109 rw=write 00:17:32.109 time_based=1 00:17:32.109 runtime=1 00:17:32.109 ioengine=libaio 00:17:32.109 direct=1 00:17:32.109 bs=4096 00:17:32.109 iodepth=1 00:17:32.109 norandommap=0 00:17:32.109 numjobs=1 00:17:32.109 00:17:32.109 verify_dump=1 00:17:32.109 verify_backlog=512 00:17:32.109 verify_state_save=0 00:17:32.109 do_verify=1 00:17:32.109 verify=crc32c-intel 00:17:32.109 [job0] 00:17:32.109 filename=/dev/nvme0n1 00:17:32.109 Could not set queue depth (nvme0n1) 00:17:32.366 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:32.366 fio-3.35 00:17:32.366 Starting 1 thread 00:17:33.736 00:17:33.736 job0: (groupid=0, jobs=1): err= 0: pid=2261493: Tue Feb 13 08:18:06 2024 00:17:33.736 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:17:33.736 slat (nsec): min=10315, max=23805, avg=22183.91, stdev=2666.46 00:17:33.736 clat (usec): min=40870, max=41986, avg=41266.87, stdev=449.75 00:17:33.736 lat (usec): min=40893, max=42009, avg=41289.06, stdev=449.26 00:17:33.736 clat percentiles (usec): 00:17:33.736 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:33.736 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:33.736 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:33.736 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:33.736 | 99.99th=[42206] 00:17:33.736 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:17:33.736 slat (nsec): min=9044, max=37210, avg=10446.27, stdev=2194.40 00:17:33.736 clat (usec): min=182, max=736, avg=243.63, stdev=59.79 00:17:33.736 lat (usec): 
min=192, max=772, avg=254.08, stdev=60.43 00:17:33.736 clat percentiles (usec): 00:17:33.736 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:17:33.736 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 237], 00:17:33.736 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 343], 95.00th=[ 351], 00:17:33.736 | 99.00th=[ 367], 99.50th=[ 478], 99.90th=[ 734], 99.95th=[ 734], 00:17:33.736 | 99.99th=[ 734] 00:17:33.736 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:33.736 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:33.736 lat (usec) : 250=64.79%, 500=30.71%, 750=0.37% 00:17:33.736 lat (msec) : 50=4.12% 00:17:33.736 cpu : usr=0.58%, sys=0.19%, ctx=534, majf=0, minf=2 00:17:33.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.736 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.736 00:17:33.736 Run status group 0 (all jobs): 00:17:33.736 READ: bw=84.6KiB/s (86.6kB/s), 84.6KiB/s-84.6KiB/s (86.6kB/s-86.6kB/s), io=88.0KiB (90.1kB), run=1040-1040msec 00:17:33.736 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:17:33.736 00:17:33.736 Disk stats (read/write): 00:17:33.736 nvme0n1: ios=68/512, merge=0/0, ticks=814/117, in_queue=931, util=92.99% 00:17:33.736 08:18:07 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:33.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:33.736 08:18:07 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:33.736 08:18:07 -- common/autotest_common.sh@1196 -- # local i=0 00:17:33.736 08:18:07 -- common/autotest_common.sh@1197 -- # 
lsblk -o NAME,SERIAL 00:17:33.736 08:18:07 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.736 08:18:07 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:17:33.736 08:18:07 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.736 08:18:07 -- common/autotest_common.sh@1208 -- # return 0 00:17:33.736 08:18:07 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:33.736 08:18:07 -- target/nmic.sh@53 -- # nvmftestfini 00:17:33.736 08:18:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.736 08:18:07 -- nvmf/common.sh@116 -- # sync 00:17:33.736 08:18:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.736 08:18:07 -- nvmf/common.sh@119 -- # set +e 00:17:33.736 08:18:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.736 08:18:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.736 rmmod nvme_tcp 00:17:33.736 rmmod nvme_fabrics 00:17:33.736 rmmod nvme_keyring 00:17:33.736 08:18:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.736 08:18:07 -- nvmf/common.sh@123 -- # set -e 00:17:33.736 08:18:07 -- nvmf/common.sh@124 -- # return 0 00:17:33.736 08:18:07 -- nvmf/common.sh@477 -- # '[' -n 2260332 ']' 00:17:33.736 08:18:07 -- nvmf/common.sh@478 -- # killprocess 2260332 00:17:33.736 08:18:07 -- common/autotest_common.sh@924 -- # '[' -z 2260332 ']' 00:17:33.736 08:18:07 -- common/autotest_common.sh@928 -- # kill -0 2260332 00:17:33.736 08:18:07 -- common/autotest_common.sh@929 -- # uname 00:17:33.736 08:18:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:33.736 08:18:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2260332 00:17:33.736 08:18:07 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:33.736 08:18:07 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:33.736 08:18:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2260332' 00:17:33.736 killing process with pid 
2260332 00:17:33.736 08:18:07 -- common/autotest_common.sh@943 -- # kill 2260332 00:17:33.736 08:18:07 -- common/autotest_common.sh@948 -- # wait 2260332 00:17:33.995 08:18:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.995 08:18:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.995 08:18:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:33.995 08:18:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.995 08:18:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.995 08:18:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.995 08:18:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.995 08:18:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.525 08:18:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:36.525 00:17:36.525 real 0m15.514s 00:17:36.525 user 0m35.440s 00:17:36.525 sys 0m5.186s 00:17:36.525 08:18:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.525 08:18:09 -- common/autotest_common.sh@10 -- # set +x 00:17:36.525 ************************************ 00:17:36.525 END TEST nvmf_nmic 00:17:36.525 ************************************ 00:17:36.525 08:18:09 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:36.525 08:18:09 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:17:36.525 08:18:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:36.525 08:18:09 -- common/autotest_common.sh@10 -- # set +x 00:17:36.525 ************************************ 00:17:36.525 START TEST nvmf_fio_target 00:17:36.525 ************************************ 00:17:36.525 08:18:09 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:36.525 * Looking for test storage... 
00:17:36.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.525 08:18:09 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.525 08:18:09 -- nvmf/common.sh@7 -- # uname -s 00:17:36.525 08:18:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.525 08:18:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.525 08:18:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.525 08:18:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.525 08:18:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.525 08:18:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.525 08:18:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.525 08:18:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.525 08:18:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.525 08:18:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.525 08:18:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:36.525 08:18:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:36.525 08:18:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.525 08:18:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.525 08:18:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.525 08:18:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.525 08:18:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.525 08:18:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.525 08:18:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.525 08:18:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.525 08:18:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.525 08:18:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.525 08:18:09 -- paths/export.sh@5 -- # export PATH 00:17:36.525 08:18:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.525 08:18:09 -- nvmf/common.sh@46 -- # : 0 00:17:36.525 08:18:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:36.525 08:18:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:36.525 08:18:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:36.525 08:18:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.525 08:18:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.525 08:18:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:36.525 08:18:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:36.525 08:18:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:36.525 08:18:09 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.525 08:18:09 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.525 08:18:09 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.525 08:18:09 -- target/fio.sh@16 -- # nvmftestinit 00:17:36.525 08:18:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:36.525 08:18:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.525 08:18:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:36.525 08:18:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:36.525 08:18:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:36.525 08:18:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.525 08:18:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:36.525 08:18:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.525 08:18:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:36.525 08:18:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:36.525 08:18:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:36.525 08:18:09 -- common/autotest_common.sh@10 -- # set +x 00:17:43.089 08:18:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:43.089 08:18:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:43.089 08:18:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:43.089 08:18:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:43.089 08:18:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:43.089 08:18:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:43.089 08:18:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:43.089 08:18:15 -- nvmf/common.sh@294 -- # net_devs=() 00:17:43.089 08:18:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:43.089 08:18:15 -- nvmf/common.sh@295 -- # e810=() 00:17:43.089 08:18:15 -- nvmf/common.sh@295 -- # local -ga e810 00:17:43.089 08:18:15 -- nvmf/common.sh@296 -- # x722=() 00:17:43.089 08:18:15 -- nvmf/common.sh@296 -- # local -ga x722 00:17:43.089 08:18:15 -- nvmf/common.sh@297 -- # mlx=() 00:17:43.089 08:18:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:43.089 08:18:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.089 08:18:15 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.089 08:18:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:43.089 08:18:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:43.089 08:18:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:43.089 08:18:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:43.089 08:18:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:43.089 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:43.089 08:18:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:43.089 08:18:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:43.089 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:43.089 08:18:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:43.089 
08:18:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:43.089 08:18:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.089 08:18:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:43.089 08:18:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.089 08:18:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:43.089 Found net devices under 0000:af:00.0: cvl_0_0 00:17:43.089 08:18:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.089 08:18:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:43.089 08:18:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.089 08:18:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:43.089 08:18:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.089 08:18:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:43.089 Found net devices under 0000:af:00.1: cvl_0_1 00:17:43.089 08:18:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.089 08:18:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:43.089 08:18:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:43.089 08:18:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:43.089 08:18:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.089 08:18:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.089 08:18:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.089 08:18:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:43.089 08:18:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.089 08:18:15 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.089 08:18:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:43.089 08:18:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.089 08:18:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.089 08:18:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:43.089 08:18:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:43.089 08:18:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.089 08:18:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.089 08:18:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.089 08:18:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.089 08:18:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:43.089 08:18:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.089 08:18:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.089 08:18:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.089 08:18:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:43.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:17:43.089 00:17:43.089 --- 10.0.0.2 ping statistics --- 00:17:43.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.089 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:43.089 08:18:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:17:43.089 00:17:43.089 --- 10.0.0.1 ping statistics --- 00:17:43.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.089 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:17:43.089 08:18:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.089 08:18:15 -- nvmf/common.sh@410 -- # return 0 00:17:43.089 08:18:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:43.089 08:18:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.089 08:18:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:43.089 08:18:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.089 08:18:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:43.089 08:18:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:43.089 08:18:15 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:43.089 08:18:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:43.089 08:18:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:43.089 08:18:15 -- common/autotest_common.sh@10 -- # set +x 00:17:43.089 08:18:15 -- nvmf/common.sh@469 -- # nvmfpid=2265961 00:17:43.089 08:18:15 -- nvmf/common.sh@470 -- # waitforlisten 2265961 00:17:43.089 08:18:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.090 08:18:15 -- common/autotest_common.sh@817 -- # '[' -z 2265961 ']' 00:17:43.090 08:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.090 08:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:43.090 08:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:43.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.090 08:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:43.090 08:18:15 -- common/autotest_common.sh@10 -- # set +x 00:17:43.090 [2024-02-13 08:18:15.855342] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:17:43.090 [2024-02-13 08:18:15.855384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.090 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.090 [2024-02-13 08:18:15.920349] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.090 [2024-02-13 08:18:15.995305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.090 [2024-02-13 08:18:15.995419] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.090 [2024-02-13 08:18:15.995426] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.090 [2024-02-13 08:18:15.995433] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
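The `nvmf_tcp_init` steps earlier in the trace split the two port interfaces across a network namespace, and the target above is then launched under an `ip netns exec` prefix so initiator and target traffic actually traverse the wire. A dry-run sketch of that setup (interface, namespace, and IP names taken from the log; `run` only prints each command, because the real ones need root):

```shell
#!/usr/bin/env bash
# Dry run: print each command instead of executing it, since
# 'ip netns' and 'ip addr' require root privileges.
run() { printf '%s\n' "$*"; }

ns=cvl_0_0_ns_spdk        # namespace holding the target-side port
target_if=cvl_0_0         # gets 10.0.0.2 inside the namespace
initiator_if=cvl_0_1      # stays in the root namespace with 10.0.0.1

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
```

With that split in place, the pings from 10.0.0.1 to 10.0.0.2 above confirm connectivity before `nvmf_tgt` is started inside the namespace.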
00:17:43.090 [2024-02-13 08:18:15.995475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.090 [2024-02-13 08:18:15.995571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.090 [2024-02-13 08:18:15.995639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.090 [2024-02-13 08:18:15.995640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.090 08:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.090 08:18:16 -- common/autotest_common.sh@850 -- # return 0 00:17:43.090 08:18:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.090 08:18:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.090 08:18:16 -- common/autotest_common.sh@10 -- # set +x 00:17:43.090 08:18:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.090 08:18:16 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:43.348 [2024-02-13 08:18:16.837328] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.349 08:18:16 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:43.607 08:18:17 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:43.607 08:18:17 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:43.607 08:18:17 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:43.607 08:18:17 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:43.865 08:18:17 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:43.865 08:18:17 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.123 08:18:17 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:17:44.123 08:18:17 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:44.381 08:18:17 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.381 08:18:17 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:44.381 08:18:17 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.638 08:18:18 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:44.638 08:18:18 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.896 08:18:18 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:44.896 08:18:18 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:44.896 08:18:18 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:45.153 08:18:18 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:45.153 08:18:18 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:45.410 08:18:18 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:45.410 08:18:18 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.410 08:18:19 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.667 [2024-02-13 08:18:19.229688] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.667 08:18:19 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:45.925 08:18:19 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:45.925 08:18:19 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:47.295 08:18:20 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:47.295 08:18:20 -- common/autotest_common.sh@1175 -- # local i=0 00:17:47.295 08:18:20 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.295 08:18:20 -- common/autotest_common.sh@1177 -- # [[ -n 4 ]] 00:17:47.295 08:18:20 -- common/autotest_common.sh@1178 -- # nvme_device_counter=4 00:17:47.295 08:18:20 -- common/autotest_common.sh@1182 -- # sleep 2 00:17:49.190 08:18:22 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:17:49.190 08:18:22 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:17:49.190 08:18:22 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.190 08:18:22 -- common/autotest_common.sh@1184 -- # nvme_devices=4 00:17:49.191 08:18:22 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.191 08:18:22 -- common/autotest_common.sh@1185 -- # return 0 00:17:49.191 08:18:22 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:49.191 [global] 00:17:49.191 thread=1 00:17:49.191 invalidate=1 00:17:49.191 rw=write 00:17:49.191 time_based=1 00:17:49.191 runtime=1 00:17:49.191 ioengine=libaio 00:17:49.191 direct=1 00:17:49.191 bs=4096 00:17:49.191 
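`waitforserial` above polls `lsblk -l -o NAME,SERIAL` until the expected number of namespaces carrying the subsystem serial appear (4 here, one per added namespace). A sketch of that wait loop, with `lsblk` stubbed out so the example runs anywhere:

```shell
#!/usr/bin/env bash
# Stub standing in for 'lsblk -l -o NAME,SERIAL'; the real helper
# inspects the block devices created by 'nvme connect'.
lsblk_stub() {
  printf 'nvme0n%d  SPDKISFASTANDAWESOME\n' 1 2 3 4
}

serial=SPDKISFASTANDAWESOME
want=4 i=0 nvme_devices=0

# Same shape as the autotest helper: retry up to 16 times, counting
# lines whose SERIAL column matches, until all namespaces are visible.
while (( i++ <= 15 )); do
  nvme_devices=$(lsblk_stub | grep -c "$serial")
  (( nvme_devices == want )) && break
  sleep 2
done

echo "found $nvme_devices/$want namespaces with serial $serial"
```

In the log the count reaches 4 on the first check after the two-second settle sleep, so the helper returns 0 and fio starts.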
iodepth=1 00:17:49.191 norandommap=0 00:17:49.191 numjobs=1 00:17:49.191 00:17:49.191 verify_dump=1 00:17:49.191 verify_backlog=512 00:17:49.191 verify_state_save=0 00:17:49.191 do_verify=1 00:17:49.191 verify=crc32c-intel 00:17:49.191 [job0] 00:17:49.191 filename=/dev/nvme0n1 00:17:49.191 [job1] 00:17:49.191 filename=/dev/nvme0n2 00:17:49.191 [job2] 00:17:49.191 filename=/dev/nvme0n3 00:17:49.191 [job3] 00:17:49.191 filename=/dev/nvme0n4 00:17:49.191 Could not set queue depth (nvme0n1) 00:17:49.191 Could not set queue depth (nvme0n2) 00:17:49.191 Could not set queue depth (nvme0n3) 00:17:49.191 Could not set queue depth (nvme0n4) 00:17:49.448 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:49.448 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:49.448 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:49.448 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:49.448 fio-3.35 00:17:49.448 Starting 4 threads 00:17:50.818 00:17:50.818 job0: (groupid=0, jobs=1): err= 0: pid=2267446: Tue Feb 13 08:18:24 2024 00:17:50.818 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:17:50.818 slat (nsec): min=9904, max=23677, avg=22623.45, stdev=2853.21 00:17:50.818 clat (usec): min=40841, max=41993, avg=41465.73, stdev=509.07 00:17:50.818 lat (usec): min=40851, max=42016, avg=41488.35, stdev=509.75 00:17:50.818 clat percentiles (usec): 00:17:50.818 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:50.818 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:17:50.818 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:50.818 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:50.818 | 99.99th=[42206] 00:17:50.818 write: IOPS=492, BW=1969KiB/s 
(2016kB/s)(2048KiB/1040msec); 0 zone resets 00:17:50.818 slat (nsec): min=9726, max=35894, avg=11278.15, stdev=2162.31 00:17:50.818 clat (usec): min=192, max=684, avg=233.73, stdev=46.23 00:17:50.818 lat (usec): min=202, max=696, avg=245.01, stdev=47.43 00:17:50.818 clat percentiles (usec): 00:17:50.818 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:17:50.818 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:17:50.818 | 70.00th=[ 229], 80.00th=[ 249], 90.00th=[ 297], 95.00th=[ 347], 00:17:50.818 | 99.00th=[ 359], 99.50th=[ 408], 99.90th=[ 685], 99.95th=[ 685], 00:17:50.818 | 99.99th=[ 685] 00:17:50.818 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:50.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:50.818 lat (usec) : 250=77.72%, 500=17.98%, 750=0.19% 00:17:50.818 lat (msec) : 50=4.12% 00:17:50.818 cpu : usr=0.58%, sys=0.29%, ctx=535, majf=0, minf=1 00:17:50.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:50.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.818 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:50.818 job1: (groupid=0, jobs=1): err= 0: pid=2267465: Tue Feb 13 08:18:24 2024 00:17:50.819 read: IOPS=1031, BW=4128KiB/s (4227kB/s)(4132KiB/1001msec) 00:17:50.819 slat (nsec): min=6386, max=39357, avg=12559.72, stdev=7179.10 00:17:50.819 clat (usec): min=352, max=1375, avg=566.97, stdev=116.56 00:17:50.819 lat (usec): min=359, max=1396, avg=579.53, stdev=121.12 00:17:50.819 clat percentiles (usec): 00:17:50.819 | 1.00th=[ 375], 5.00th=[ 416], 10.00th=[ 445], 20.00th=[ 486], 00:17:50.819 | 30.00th=[ 498], 40.00th=[ 506], 50.00th=[ 523], 60.00th=[ 603], 00:17:50.819 | 70.00th=[ 660], 80.00th=[ 676], 90.00th=[ 
693], 95.00th=[ 717], 00:17:50.819 | 99.00th=[ 758], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1369], 00:17:50.819 | 99.99th=[ 1369] 00:17:50.819 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:50.819 slat (nsec): min=8690, max=35320, avg=10226.26, stdev=1824.20 00:17:50.819 clat (usec): min=177, max=717, avg=246.02, stdev=72.15 00:17:50.819 lat (usec): min=187, max=728, avg=256.25, stdev=72.59 00:17:50.819 clat percentiles (usec): 00:17:50.819 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 200], 00:17:50.819 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 239], 00:17:50.819 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 351], 00:17:50.819 | 99.00th=[ 652], 99.50th=[ 701], 99.90th=[ 717], 99.95th=[ 717], 00:17:50.819 | 99.99th=[ 717] 00:17:50.819 bw ( KiB/s): min= 7880, max= 7880, per=50.02%, avg=7880.00, stdev= 0.00, samples=1 00:17:50.819 iops : min= 1970, max= 1970, avg=1970.00, stdev= 0.00, samples=1 00:17:50.819 lat (usec) : 250=41.69%, 500=30.60%, 750=27.25%, 1000=0.16% 00:17:50.819 lat (msec) : 2=0.31% 00:17:50.819 cpu : usr=1.50%, sys=3.20%, ctx=2571, majf=0, minf=2 00:17:50.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:50.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.819 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:50.819 job2: (groupid=0, jobs=1): err= 0: pid=2267479: Tue Feb 13 08:18:24 2024 00:17:50.819 read: IOPS=57, BW=229KiB/s (235kB/s)(236KiB/1029msec) 00:17:50.819 slat (nsec): min=7004, max=25945, avg=9659.92, stdev=4171.73 00:17:50.819 clat (usec): min=308, max=41978, avg=15128.89, stdev=19447.55 00:17:50.819 lat (usec): min=315, max=41989, avg=15138.55, stdev=19450.18 00:17:50.819 clat percentiles (usec): 00:17:50.819 | 
1.00th=[ 310], 5.00th=[ 375], 10.00th=[ 494], 20.00th=[ 553], 00:17:50.819 | 30.00th=[ 611], 40.00th=[ 906], 50.00th=[ 914], 60.00th=[ 1418], 00:17:50.819 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:50.819 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:50.819 | 99.99th=[42206] 00:17:50.819 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:17:50.819 slat (nsec): min=11522, max=57791, avg=14566.27, stdev=5064.85 00:17:50.819 clat (usec): min=193, max=482, avg=245.84, stdev=25.05 00:17:50.819 lat (usec): min=206, max=494, avg=260.40, stdev=25.32 00:17:50.819 clat percentiles (usec): 00:17:50.819 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:17:50.819 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:17:50.819 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 273], 00:17:50.819 | 99.00th=[ 375], 99.50th=[ 441], 99.90th=[ 482], 99.95th=[ 482], 00:17:50.819 | 99.99th=[ 482] 00:17:50.819 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:50.819 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:50.819 lat (usec) : 250=64.10%, 500=26.97%, 750=2.10%, 1000=1.75% 00:17:50.819 lat (msec) : 2=1.40%, 50=3.68% 00:17:50.819 cpu : usr=0.39%, sys=1.07%, ctx=572, majf=0, minf=1 00:17:50.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:50.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.819 issued rwts: total=59,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:50.819 job3: (groupid=0, jobs=1): err= 0: pid=2267481: Tue Feb 13 08:18:24 2024 00:17:50.819 read: IOPS=1004, BW=4020KiB/s (4116kB/s)(4100KiB/1020msec) 00:17:50.819 slat (nsec): min=7410, max=37484, avg=8206.41, 
stdev=1288.43 00:17:50.819 clat (usec): min=379, max=41753, avg=546.72, stdev=1289.11 00:17:50.819 lat (usec): min=387, max=41763, avg=554.92, stdev=1289.18 00:17:50.819 clat percentiles (usec): 00:17:50.819 | 1.00th=[ 400], 5.00th=[ 429], 10.00th=[ 461], 20.00th=[ 482], 00:17:50.819 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 510], 00:17:50.819 | 70.00th=[ 519], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 603], 00:17:50.819 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 685], 99.95th=[41681], 00:17:50.819 | 99.99th=[41681] 00:17:50.819 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:17:50.819 slat (nsec): min=9940, max=39326, avg=12291.13, stdev=2040.92 00:17:50.819 clat (usec): min=189, max=1838, avg=276.49, stdev=85.97 00:17:50.819 lat (usec): min=200, max=1850, avg=288.79, stdev=86.00 00:17:50.819 clat percentiles (usec): 00:17:50.819 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:17:50.819 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 251], 60.00th=[ 265], 00:17:50.819 | 70.00th=[ 281], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 404], 00:17:50.819 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 717], 99.95th=[ 1844], 00:17:50.819 | 99.99th=[ 1844] 00:17:50.819 bw ( KiB/s): min= 5840, max= 6448, per=39.00%, avg=6144.00, stdev=429.92, samples=2 00:17:50.819 iops : min= 1460, max= 1612, avg=1536.00, stdev=107.48, samples=2 00:17:50.819 lat (usec) : 250=29.44%, 500=46.58%, 750=23.90% 00:17:50.819 lat (msec) : 2=0.04%, 50=0.04% 00:17:50.819 cpu : usr=1.28%, sys=2.85%, ctx=2562, majf=0, minf=1 00:17:50.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:50.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.819 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:50.819 
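fio reports each bandwidth figure twice in the results above, once in binary units and once in decimal units, e.g. `BW=1969KiB/s (2016kB/s)`. The parenthesized value is just the KiB/s figure scaled by 1024/1000; a quick check against two lines from the log:

```shell
#!/usr/bin/env bash
# fio prints bandwidth in KiB/s (1024-byte units) with the decimal
# kB/s (1000-byte units) equivalent in parentheses.
kib_to_kb() { echo $(( $1 * 1024 / 1000 )); }

echo "1969 KiB/s = $(kib_to_kb 1969) kB/s"   # job0's write line above
echo "4128 KiB/s = $(kib_to_kb 4128) kB/s"   # job1's read line above
```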
00:17:50.819 Run status group 0 (all jobs): 00:17:50.819 READ: bw=8227KiB/s (8424kB/s), 84.6KiB/s-4128KiB/s (86.6kB/s-4227kB/s), io=8556KiB (8761kB), run=1001-1040msec 00:17:50.819 WRITE: bw=15.4MiB/s (16.1MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:17:50.819 00:17:50.819 Disk stats (read/write): 00:17:50.819 nvme0n1: ios=63/512, merge=0/0, ticks=800/122, in_queue=922, util=86.67% 00:17:50.819 nvme0n2: ios=1066/1024, merge=0/0, ticks=661/231, in_queue=892, util=91.13% 00:17:50.819 nvme0n3: ios=39/512, merge=0/0, ticks=1566/118, in_queue=1684, util=93.06% 00:17:50.819 nvme0n4: ios=1073/1077, merge=0/0, ticks=668/290, in_queue=958, util=96.28% 00:17:50.819 08:18:24 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:50.819 [global] 00:17:50.819 thread=1 00:17:50.819 invalidate=1 00:17:50.819 rw=randwrite 00:17:50.819 time_based=1 00:17:50.819 runtime=1 00:17:50.819 ioengine=libaio 00:17:50.819 direct=1 00:17:50.819 bs=4096 00:17:50.819 iodepth=1 00:17:50.819 norandommap=0 00:17:50.819 numjobs=1 00:17:50.819 00:17:50.819 verify_dump=1 00:17:50.819 verify_backlog=512 00:17:50.819 verify_state_save=0 00:17:50.819 do_verify=1 00:17:50.819 verify=crc32c-intel 00:17:50.819 [job0] 00:17:50.819 filename=/dev/nvme0n1 00:17:50.819 [job1] 00:17:50.819 filename=/dev/nvme0n2 00:17:50.819 [job2] 00:17:50.819 filename=/dev/nvme0n3 00:17:50.819 [job3] 00:17:50.819 filename=/dev/nvme0n4 00:17:50.819 Could not set queue depth (nvme0n1) 00:17:50.819 Could not set queue depth (nvme0n2) 00:17:50.819 Could not set queue depth (nvme0n3) 00:17:50.819 Could not set queue depth (nvme0n4) 00:17:51.076 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.076 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.076 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.076 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.076 fio-3.35 00:17:51.076 Starting 4 threads 00:17:52.443 00:17:52.443 job0: (groupid=0, jobs=1): err= 0: pid=2267856: Tue Feb 13 08:18:25 2024 00:17:52.443 read: IOPS=1182, BW=4731KiB/s (4845kB/s)(4736KiB/1001msec) 00:17:52.443 slat (usec): min=7, max=166, avg= 9.10, stdev= 4.88 00:17:52.443 clat (usec): min=317, max=1083, avg=488.50, stdev=73.89 00:17:52.443 lat (usec): min=325, max=1091, avg=497.60, stdev=74.56 00:17:52.443 clat percentiles (usec): 00:17:52.443 | 1.00th=[ 343], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 465], 00:17:52.443 | 30.00th=[ 474], 40.00th=[ 478], 50.00th=[ 482], 60.00th=[ 486], 00:17:52.443 | 70.00th=[ 494], 80.00th=[ 502], 90.00th=[ 523], 95.00th=[ 570], 00:17:52.444 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 996], 99.95th=[ 1090], 00:17:52.444 | 99.99th=[ 1090] 00:17:52.444 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:52.444 slat (nsec): min=10681, max=47817, avg=12166.08, stdev=2058.67 00:17:52.444 clat (usec): min=200, max=643, avg=249.42, stdev=50.53 00:17:52.444 lat (usec): min=211, max=658, avg=261.58, stdev=51.08 00:17:52.444 clat percentiles (usec): 00:17:52.444 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 217], 00:17:52.444 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:17:52.444 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 322], 95.00th=[ 355], 00:17:52.444 | 99.00th=[ 420], 99.50th=[ 545], 99.90th=[ 611], 99.95th=[ 644], 00:17:52.444 | 99.99th=[ 644] 00:17:52.444 bw ( KiB/s): min= 7576, max= 7576, per=40.61%, avg=7576.00, stdev= 0.00, samples=1 00:17:52.444 iops : min= 1894, max= 1894, avg=1894.00, stdev= 0.00, samples=1 00:17:52.444 lat (usec) : 250=38.64%, 500=51.03%, 750=9.45%, 1000=0.85% 00:17:52.444 lat (msec) : 2=0.04% 
00:17:52.444 cpu : usr=2.70%, sys=4.20%, ctx=2724, majf=0, minf=1 00:17:52.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 issued rwts: total=1184,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.444 job1: (groupid=0, jobs=1): err= 0: pid=2267857: Tue Feb 13 08:18:25 2024 00:17:52.444 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:17:52.444 slat (nsec): min=9888, max=26745, avg=17172.81, stdev=6352.88 00:17:52.444 clat (usec): min=40856, max=42284, avg=41325.79, stdev=525.19 00:17:52.444 lat (usec): min=40882, max=42295, avg=41342.96, stdev=524.14 00:17:52.444 clat percentiles (usec): 00:17:52.444 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:52.444 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:52.444 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:52.444 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:52.444 | 99.99th=[42206] 00:17:52.444 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:17:52.444 slat (nsec): min=10472, max=56909, avg=12614.98, stdev=3657.51 00:17:52.444 clat (usec): min=203, max=436, avg=266.76, stdev=57.21 00:17:52.444 lat (usec): min=214, max=464, avg=279.37, stdev=57.58 00:17:52.444 clat percentiles (usec): 00:17:52.444 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:17:52.444 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 251], 00:17:52.444 | 70.00th=[ 277], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 383], 00:17:52.444 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 437], 99.95th=[ 437], 00:17:52.444 | 99.99th=[ 437] 00:17:52.444 bw ( KiB/s): min= 4096, max= 4096, per=21.95%, 
avg=4096.00, stdev= 0.00, samples=1 00:17:52.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:52.444 lat (usec) : 250=55.72%, 500=40.34% 00:17:52.444 lat (msec) : 50=3.94% 00:17:52.444 cpu : usr=0.59%, sys=0.79%, ctx=534, majf=0, minf=1 00:17:52.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.444 job2: (groupid=0, jobs=1): err= 0: pid=2267860: Tue Feb 13 08:18:25 2024 00:17:52.444 read: IOPS=1157, BW=4631KiB/s (4743kB/s)(4636KiB/1001msec) 00:17:52.444 slat (nsec): min=6518, max=27013, avg=8906.74, stdev=1819.69 00:17:52.444 clat (usec): min=290, max=1196, avg=468.21, stdev=53.91 00:17:52.444 lat (usec): min=299, max=1207, avg=477.12, stdev=54.51 00:17:52.444 clat percentiles (usec): 00:17:52.444 | 1.00th=[ 330], 5.00th=[ 404], 10.00th=[ 429], 20.00th=[ 441], 00:17:52.444 | 30.00th=[ 449], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 478], 00:17:52.444 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 506], 95.00th=[ 529], 00:17:52.444 | 99.00th=[ 644], 99.50th=[ 725], 99.90th=[ 1139], 99.95th=[ 1205], 00:17:52.444 | 99.99th=[ 1205] 00:17:52.444 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:52.444 slat (usec): min=4, max=211, avg=12.28, stdev= 8.71 00:17:52.444 clat (usec): min=182, max=670, avg=273.54, stdev=69.23 00:17:52.444 lat (usec): min=189, max=749, avg=285.82, stdev=71.15 00:17:52.444 clat percentiles (usec): 00:17:52.444 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:17:52.444 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 269], 00:17:52.444 | 70.00th=[ 293], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 412], 00:17:52.444 | 
99.00th=[ 457], 99.50th=[ 502], 99.90th=[ 619], 99.95th=[ 668], 00:17:52.444 | 99.99th=[ 668] 00:17:52.444 bw ( KiB/s): min= 6472, max= 6472, per=34.69%, avg=6472.00, stdev= 0.00, samples=1 00:17:52.444 iops : min= 1618, max= 1618, avg=1618.00, stdev= 0.00, samples=1 00:17:52.444 lat (usec) : 250=30.50%, 500=63.04%, 750=6.31%, 1000=0.07% 00:17:52.444 lat (msec) : 2=0.07% 00:17:52.444 cpu : usr=1.70%, sys=4.40%, ctx=2697, majf=0, minf=1 00:17:52.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 issued rwts: total=1159,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.444 job3: (groupid=0, jobs=1): err= 0: pid=2267861: Tue Feb 13 08:18:25 2024 00:17:52.444 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:52.444 slat (nsec): min=2944, max=34939, avg=8408.89, stdev=1994.68 00:17:52.444 clat (usec): min=331, max=42189, avg=682.83, stdev=2881.68 00:17:52.444 lat (usec): min=339, max=42202, avg=691.24, stdev=2882.81 00:17:52.444 clat percentiles (usec): 00:17:52.444 | 1.00th=[ 351], 5.00th=[ 388], 10.00th=[ 445], 20.00th=[ 461], 00:17:52.444 | 30.00th=[ 469], 40.00th=[ 474], 50.00th=[ 478], 60.00th=[ 486], 00:17:52.444 | 70.00th=[ 490], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 562], 00:17:52.444 | 99.00th=[ 783], 99.50th=[ 1827], 99.90th=[42206], 99.95th=[42206], 00:17:52.444 | 99.99th=[42206] 00:17:52.444 write: IOPS=1139, BW=4559KiB/s (4669kB/s)(4564KiB/1001msec); 0 zone resets 00:17:52.444 slat (nsec): min=10192, max=39744, avg=11868.90, stdev=1939.80 00:17:52.444 clat (usec): min=164, max=1113, avg=238.18, stdev=62.72 00:17:52.444 lat (usec): min=177, max=1127, avg=250.05, stdev=63.01 00:17:52.444 clat percentiles (usec): 00:17:52.444 | 1.00th=[ 172], 5.00th=[ 178], 
10.00th=[ 182], 20.00th=[ 190], 00:17:52.444 | 30.00th=[ 198], 40.00th=[ 210], 50.00th=[ 233], 60.00th=[ 249], 00:17:52.444 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 326], 00:17:52.444 | 99.00th=[ 461], 99.50th=[ 519], 99.90th=[ 594], 99.95th=[ 1106], 00:17:52.444 | 99.99th=[ 1106] 00:17:52.444 bw ( KiB/s): min= 4096, max= 4096, per=21.95%, avg=4096.00, stdev= 0.00, samples=1 00:17:52.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:52.444 lat (usec) : 250=32.06%, 500=59.77%, 750=7.62%, 1000=0.18% 00:17:52.444 lat (msec) : 2=0.14%, 50=0.23% 00:17:52.444 cpu : usr=1.40%, sys=2.90%, ctx=2166, majf=0, minf=2 00:17:52.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.444 issued rwts: total=1024,1141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.444 00:17:52.444 Run status group 0 (all jobs): 00:17:52.444 READ: bw=13.1MiB/s (13.7MB/s), 82.9KiB/s-4731KiB/s (84.9kB/s-4845kB/s), io=13.2MiB (13.9MB), run=1001-1013msec 00:17:52.444 WRITE: bw=18.2MiB/s (19.1MB/s), 2022KiB/s-6138KiB/s (2070kB/s-6285kB/s), io=18.5MiB (19.4MB), run=1001-1013msec 00:17:52.444 00:17:52.444 Disk stats (read/write): 00:17:52.444 nvme0n1: ios=1049/1265, merge=0/0, ticks=1429/304, in_queue=1733, util=93.49% 00:17:52.444 nvme0n2: ios=49/512, merge=0/0, ticks=1183/125, in_queue=1308, util=97.05% 00:17:52.444 nvme0n3: ios=1081/1173, merge=0/0, ticks=546/333, in_queue=879, util=91.00% 00:17:52.444 nvme0n4: ios=853/1024, merge=0/0, ticks=1568/230, in_queue=1798, util=100.00% 00:17:52.444 08:18:25 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:52.444 [global] 00:17:52.444 thread=1 00:17:52.444 
invalidate=1 00:17:52.444 rw=write 00:17:52.444 time_based=1 00:17:52.444 runtime=1 00:17:52.444 ioengine=libaio 00:17:52.444 direct=1 00:17:52.444 bs=4096 00:17:52.444 iodepth=128 00:17:52.444 norandommap=0 00:17:52.444 numjobs=1 00:17:52.444 00:17:52.444 verify_dump=1 00:17:52.444 verify_backlog=512 00:17:52.444 verify_state_save=0 00:17:52.444 do_verify=1 00:17:52.444 verify=crc32c-intel 00:17:52.444 [job0] 00:17:52.444 filename=/dev/nvme0n1 00:17:52.444 [job1] 00:17:52.444 filename=/dev/nvme0n2 00:17:52.444 [job2] 00:17:52.444 filename=/dev/nvme0n3 00:17:52.444 [job3] 00:17:52.444 filename=/dev/nvme0n4 00:17:52.444 Could not set queue depth (nvme0n1) 00:17:52.444 Could not set queue depth (nvme0n2) 00:17:52.444 Could not set queue depth (nvme0n3) 00:17:52.444 Could not set queue depth (nvme0n4) 00:17:52.701 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:52.701 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:52.701 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:52.701 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:52.701 fio-3.35 00:17:52.701 Starting 4 threads 00:17:54.087 00:17:54.087 job0: (groupid=0, jobs=1): err= 0: pid=2268231: Tue Feb 13 08:18:27 2024 00:17:54.087 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:17:54.087 slat (nsec): min=1233, max=13984k, avg=106005.90, stdev=797072.89 00:17:54.087 clat (usec): min=972, max=48723, avg=16143.00, stdev=8452.84 00:17:54.087 lat (usec): min=980, max=49312, avg=16249.01, stdev=8500.27 00:17:54.087 clat percentiles (usec): 00:17:54.087 | 1.00th=[ 5538], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[ 9110], 00:17:54.087 | 30.00th=[10028], 40.00th=[11731], 50.00th=[13829], 60.00th=[15795], 00:17:54.087 | 70.00th=[18744], 80.00th=[21890], 
90.00th=[28705], 95.00th=[33817], 00:17:54.087 | 99.00th=[40633], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:17:54.087 | 99.99th=[48497] 00:17:54.087 write: IOPS=4198, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1003msec); 0 zone resets 00:17:54.087 slat (usec): min=2, max=19814, avg=110.36, stdev=797.12 00:17:54.087 clat (usec): min=716, max=48460, avg=14396.98, stdev=7725.56 00:17:54.087 lat (usec): min=1038, max=48467, avg=14507.34, stdev=7781.81 00:17:54.087 clat percentiles (usec): 00:17:54.087 | 1.00th=[ 2024], 5.00th=[ 5080], 10.00th=[ 6128], 20.00th=[ 7832], 00:17:54.087 | 30.00th=[ 9372], 40.00th=[11469], 50.00th=[13042], 60.00th=[14353], 00:17:54.087 | 70.00th=[16581], 80.00th=[20317], 90.00th=[26870], 95.00th=[29230], 00:17:54.087 | 99.00th=[34866], 99.50th=[35914], 99.90th=[48497], 99.95th=[48497], 00:17:54.087 | 99.99th=[48497] 00:17:54.087 bw ( KiB/s): min=15376, max=17413, per=23.26%, avg=16394.50, stdev=1440.38, samples=2 00:17:54.087 iops : min= 3844, max= 4353, avg=4098.50, stdev=359.92, samples=2 00:17:54.087 lat (usec) : 750=0.01%, 1000=0.02% 00:17:54.087 lat (msec) : 2=0.73%, 4=0.91%, 10=30.07%, 20=44.73%, 50=23.51% 00:17:54.087 cpu : usr=3.69%, sys=4.99%, ctx=401, majf=0, minf=1 00:17:54.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:54.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.087 issued rwts: total=4096,4211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.087 job1: (groupid=0, jobs=1): err= 0: pid=2268232: Tue Feb 13 08:18:27 2024 00:17:54.087 read: IOPS=4954, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1005msec) 00:17:54.087 slat (nsec): min=1042, max=12117k, avg=80548.23, stdev=605628.55 00:17:54.087 clat (usec): min=1628, max=35475, avg=12882.91, stdev=5102.99 00:17:54.087 lat (usec): min=1636, max=35504, avg=12963.46, 
stdev=5143.09 00:17:54.087 clat percentiles (usec): 00:17:54.087 | 1.00th=[ 4359], 5.00th=[ 5932], 10.00th=[ 7046], 20.00th=[ 8455], 00:17:54.087 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[12256], 60.00th=[13698], 00:17:54.087 | 70.00th=[15270], 80.00th=[17433], 90.00th=[20055], 95.00th=[22676], 00:17:54.087 | 99.00th=[26346], 99.50th=[27132], 99.90th=[28443], 99.95th=[28443], 00:17:54.087 | 99.99th=[35390] 00:17:54.087 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:17:54.087 slat (nsec): min=1865, max=8377.4k, avg=95424.70, stdev=550610.28 00:17:54.087 clat (usec): min=1309, max=41743, avg=12384.14, stdev=6189.40 00:17:54.087 lat (usec): min=1319, max=41752, avg=12479.56, stdev=6228.60 00:17:54.087 clat percentiles (usec): 00:17:54.087 | 1.00th=[ 3785], 5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[ 7963], 00:17:54.087 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[10945], 60.00th=[11863], 00:17:54.087 | 70.00th=[12911], 80.00th=[14746], 90.00th=[19006], 95.00th=[27657], 00:17:54.087 | 99.00th=[35914], 99.50th=[36439], 99.90th=[40633], 99.95th=[41157], 00:17:54.087 | 99.99th=[41681] 00:17:54.087 bw ( KiB/s): min=19160, max=21800, per=29.06%, avg=20480.00, stdev=1866.76, samples=2 00:17:54.087 iops : min= 4790, max= 5450, avg=5120.00, stdev=466.69, samples=2 00:17:54.087 lat (msec) : 2=0.13%, 4=0.58%, 10=36.45%, 20=53.51%, 50=9.33% 00:17:54.088 cpu : usr=2.99%, sys=5.28%, ctx=462, majf=0, minf=1 00:17:54.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:54.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.088 issued rwts: total=4979,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.088 job2: (groupid=0, jobs=1): err= 0: pid=2268233: Tue Feb 13 08:18:27 2024 00:17:54.088 read: IOPS=2549, BW=9.96MiB/s 
(10.4MB/s)(10.0MiB/1004msec) 00:17:54.088 slat (nsec): min=1674, max=47534k, avg=183609.94, stdev=1575448.97 00:17:54.088 clat (usec): min=8302, max=88467, avg=22605.27, stdev=16162.79 00:17:54.088 lat (usec): min=8307, max=88478, avg=22788.88, stdev=16279.49 00:17:54.088 clat percentiles (usec): 00:17:54.088 | 1.00th=[ 8455], 5.00th=[10683], 10.00th=[11207], 20.00th=[13173], 00:17:54.088 | 30.00th=[13829], 40.00th=[14746], 50.00th=[16188], 60.00th=[17957], 00:17:54.088 | 70.00th=[20579], 80.00th=[28443], 90.00th=[51643], 95.00th=[63701], 00:17:54.088 | 99.00th=[73925], 99.50th=[73925], 99.90th=[80217], 99.95th=[82314], 00:17:54.088 | 99.99th=[88605] 00:17:54.088 write: IOPS=2731, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1004msec); 0 zone resets 00:17:54.088 slat (usec): min=2, max=45324, avg=185.21, stdev=1352.29 00:17:54.088 clat (usec): min=435, max=59667, avg=22124.59, stdev=11032.80 00:17:54.088 lat (usec): min=657, max=80792, avg=22309.80, stdev=11148.51 00:17:54.088 clat percentiles (usec): 00:17:54.088 | 1.00th=[ 5276], 5.00th=[ 8717], 10.00th=[11076], 20.00th=[11600], 00:17:54.088 | 30.00th=[14484], 40.00th=[16581], 50.00th=[18744], 60.00th=[22938], 00:17:54.088 | 70.00th=[26870], 80.00th=[32900], 90.00th=[39584], 95.00th=[42730], 00:17:54.088 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51643], 99.95th=[54264], 00:17:54.088 | 99.99th=[59507] 00:17:54.088 bw ( KiB/s): min=10248, max=10642, per=14.82%, avg=10445.00, stdev=278.60, samples=2 00:17:54.088 iops : min= 2562, max= 2660, avg=2611.00, stdev=69.30, samples=2 00:17:54.088 lat (usec) : 500=0.02%, 750=0.06% 00:17:54.088 lat (msec) : 4=0.08%, 10=4.70%, 20=56.71%, 50=31.71%, 100=6.73% 00:17:54.088 cpu : usr=1.99%, sys=2.89%, ctx=352, majf=0, minf=1 00:17:54.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:54.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:17:54.088 issued rwts: total=2560,2742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.088 job3: (groupid=0, jobs=1): err= 0: pid=2268234: Tue Feb 13 08:18:27 2024 00:17:54.088 read: IOPS=5226, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec) 00:17:54.088 slat (nsec): min=1073, max=11680k, avg=73123.83, stdev=579336.74 00:17:54.088 clat (usec): min=1764, max=30699, avg=10970.71, stdev=3900.33 00:17:54.088 lat (usec): min=1768, max=30701, avg=11043.83, stdev=3920.86 00:17:54.088 clat percentiles (usec): 00:17:54.088 | 1.00th=[ 2737], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8291], 00:17:54.088 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10683], 00:17:54.088 | 70.00th=[12387], 80.00th=[13829], 90.00th=[15664], 95.00th=[18744], 00:17:54.088 | 99.00th=[23200], 99.50th=[25560], 99.90th=[30540], 99.95th=[30802], 00:17:54.088 | 99.99th=[30802] 00:17:54.088 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:17:54.088 slat (nsec): min=1962, max=36870k, avg=86720.27, stdev=735677.01 00:17:54.088 clat (usec): min=982, max=46405, avg=11513.05, stdev=4506.26 00:17:54.088 lat (usec): min=991, max=46418, avg=11599.77, stdev=4554.06 00:17:54.088 clat percentiles (usec): 00:17:54.088 | 1.00th=[ 3294], 5.00th=[ 5211], 10.00th=[ 6587], 20.00th=[ 7767], 00:17:54.088 | 30.00th=[ 8455], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[12387], 00:17:54.088 | 70.00th=[13698], 80.00th=[15664], 90.00th=[17171], 95.00th=[19006], 00:17:54.088 | 99.00th=[23725], 99.50th=[28443], 99.90th=[30802], 99.95th=[38011], 00:17:54.088 | 99.99th=[46400] 00:17:54.088 bw ( KiB/s): min=22211, max=22800, per=31.94%, avg=22505.50, stdev=416.49, samples=2 00:17:54.088 iops : min= 5552, max= 5700, avg=5626.00, stdev=104.65, samples=2 00:17:54.088 lat (usec) : 1000=0.03% 00:17:54.088 lat (msec) : 2=0.29%, 4=1.75%, 10=44.99%, 20=49.25%, 50=3.70% 00:17:54.088 cpu : usr=3.49%, sys=5.28%, ctx=488, majf=0, minf=1 
00:17:54.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:54.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.088 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.088 00:17:54.088 Run status group 0 (all jobs): 00:17:54.088 READ: bw=65.6MiB/s (68.8MB/s), 9.96MiB/s-20.4MiB/s (10.4MB/s-21.4MB/s), io=65.9MiB (69.1MB), run=1003-1005msec 00:17:54.088 WRITE: bw=68.8MiB/s (72.2MB/s), 10.7MiB/s-21.9MiB/s (11.2MB/s-23.0MB/s), io=69.2MiB (72.5MB), run=1003-1005msec 00:17:54.088 00:17:54.088 Disk stats (read/write): 00:17:54.088 nvme0n1: ios=3639/3645, merge=0/0, ticks=45251/47538, in_queue=92789, util=91.28% 00:17:54.088 nvme0n2: ios=4006/4096, merge=0/0, ticks=47167/45919, in_queue=93086, util=91.15% 00:17:54.088 nvme0n3: ios=1845/2048, merge=0/0, ticks=25231/26298, in_queue=51529, util=99.37% 00:17:54.088 nvme0n4: ios=4644/4924, merge=0/0, ticks=46685/53427, in_queue=100112, util=99.68% 00:17:54.088 08:18:27 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:54.088 [global] 00:17:54.088 thread=1 00:17:54.088 invalidate=1 00:17:54.088 rw=randwrite 00:17:54.088 time_based=1 00:17:54.088 runtime=1 00:17:54.088 ioengine=libaio 00:17:54.088 direct=1 00:17:54.088 bs=4096 00:17:54.088 iodepth=128 00:17:54.088 norandommap=0 00:17:54.088 numjobs=1 00:17:54.088 00:17:54.088 verify_dump=1 00:17:54.088 verify_backlog=512 00:17:54.088 verify_state_save=0 00:17:54.088 do_verify=1 00:17:54.088 verify=crc32c-intel 00:17:54.088 [job0] 00:17:54.088 filename=/dev/nvme0n1 00:17:54.088 [job1] 00:17:54.088 filename=/dev/nvme0n2 00:17:54.088 [job2] 00:17:54.088 filename=/dev/nvme0n3 00:17:54.088 [job3] 00:17:54.088 filename=/dev/nvme0n4 00:17:54.088 Could 
not set queue depth (nvme0n1) 00:17:54.088 Could not set queue depth (nvme0n2) 00:17:54.088 Could not set queue depth (nvme0n3) 00:17:54.088 Could not set queue depth (nvme0n4) 00:17:54.388 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.388 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.388 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.388 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.388 fio-3.35 00:17:54.388 Starting 4 threads 00:17:55.338 00:17:55.338 job0: (groupid=0, jobs=1): err= 0: pid=2268603: Tue Feb 13 08:18:29 2024 00:17:55.338 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:55.338 slat (nsec): min=1030, max=19035k, avg=101111.61, stdev=769502.47 00:17:55.338 clat (usec): min=3140, max=36471, avg=14200.63, stdev=5410.79 00:17:55.338 lat (usec): min=3145, max=36475, avg=14301.74, stdev=5453.76 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 6325], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9634], 00:17:55.338 | 30.00th=[11076], 40.00th=[11863], 50.00th=[13173], 60.00th=[14091], 00:17:55.338 | 70.00th=[15533], 80.00th=[17695], 90.00th=[22414], 95.00th=[25560], 00:17:55.338 | 99.00th=[28967], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:17:55.338 | 99.99th=[36439] 00:17:55.338 write: IOPS=4892, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1003msec); 0 zone resets 00:17:55.338 slat (nsec): min=1899, max=17750k, avg=91690.21, stdev=651511.63 00:17:55.338 clat (usec): min=1225, max=32315, avg=12495.25, stdev=4448.58 00:17:55.338 lat (usec): min=1235, max=32373, avg=12586.94, stdev=4468.95 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 3752], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[ 9241], 00:17:55.338 | 30.00th=[10290], 
40.00th=[11207], 50.00th=[11994], 60.00th=[13042], 00:17:55.338 | 70.00th=[14222], 80.00th=[15139], 90.00th=[18744], 95.00th=[21103], 00:17:55.338 | 99.00th=[23987], 99.50th=[27132], 99.90th=[31589], 99.95th=[31589], 00:17:55.338 | 99.99th=[32375] 00:17:55.338 bw ( KiB/s): min=17760, max=20480, per=25.91%, avg=19120.00, stdev=1923.33, samples=2 00:17:55.338 iops : min= 4440, max= 5120, avg=4780.00, stdev=480.83, samples=2 00:17:55.338 lat (msec) : 2=0.04%, 4=0.85%, 10=23.77%, 20=64.68%, 50=10.66% 00:17:55.338 cpu : usr=2.59%, sys=5.49%, ctx=462, majf=0, minf=1 00:17:55.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:55.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.338 issued rwts: total=4608,4907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.338 job1: (groupid=0, jobs=1): err= 0: pid=2268604: Tue Feb 13 08:18:29 2024 00:17:55.338 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:17:55.338 slat (nsec): min=1036, max=15499k, avg=122623.56, stdev=766301.39 00:17:55.338 clat (usec): min=6524, max=66884, avg=15319.91, stdev=7347.91 00:17:55.338 lat (usec): min=6527, max=79009, avg=15442.53, stdev=7425.64 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 7308], 5.00th=[ 8455], 10.00th=[ 9896], 20.00th=[10421], 00:17:55.338 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12387], 60.00th=[13960], 00:17:55.338 | 70.00th=[16188], 80.00th=[20055], 90.00th=[24249], 95.00th=[28443], 00:17:55.338 | 99.00th=[41157], 99.50th=[49546], 99.90th=[66847], 99.95th=[66847], 00:17:55.338 | 99.99th=[66847] 00:17:55.338 write: IOPS=3835, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:17:55.338 slat (usec): min=2, max=13116, avg=141.75, stdev=750.48 00:17:55.338 clat (usec): min=293, max=66003, avg=18642.62, stdev=13701.74 
00:17:55.338 lat (usec): min=1718, max=66010, avg=18784.36, stdev=13791.49 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 3720], 5.00th=[ 8029], 10.00th=[ 9372], 20.00th=[10028], 00:17:55.338 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12518], 60.00th=[13829], 00:17:55.338 | 70.00th=[16319], 80.00th=[27395], 90.00th=[38536], 95.00th=[54789], 00:17:55.338 | 99.00th=[62129], 99.50th=[63701], 99.90th=[65799], 99.95th=[65799], 00:17:55.338 | 99.99th=[65799] 00:17:55.338 bw ( KiB/s): min=12288, max=12288, per=16.65%, avg=12288.00, stdev= 0.00, samples=1 00:17:55.338 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:55.338 lat (usec) : 500=0.01% 00:17:55.338 lat (msec) : 2=0.03%, 4=0.57%, 10=15.52%, 20=60.65%, 50=19.64% 00:17:55.338 lat (msec) : 100=3.58% 00:17:55.338 cpu : usr=1.40%, sys=3.50%, ctx=454, majf=0, minf=1 00:17:55.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:55.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.338 issued rwts: total=3584,3839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.338 job2: (groupid=0, jobs=1): err= 0: pid=2268605: Tue Feb 13 08:18:29 2024 00:17:55.338 read: IOPS=4530, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1017msec) 00:17:55.338 slat (nsec): min=1497, max=12755k, avg=111409.43, stdev=733619.07 00:17:55.338 clat (usec): min=4446, max=31831, avg=14650.95, stdev=5132.57 00:17:55.338 lat (usec): min=4453, max=31836, avg=14762.36, stdev=5156.67 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10290], 00:17:55.338 | 30.00th=[11076], 40.00th=[11731], 50.00th=[13304], 60.00th=[15008], 00:17:55.338 | 70.00th=[16319], 80.00th=[19268], 90.00th=[22152], 95.00th=[25297], 00:17:55.338 | 99.00th=[29230], 99.50th=[31851], 
99.90th=[31851], 99.95th=[31851], 00:17:55.338 | 99.99th=[31851] 00:17:55.338 write: IOPS=4763, BW=18.6MiB/s (19.5MB/s)(18.9MiB/1017msec); 0 zone resets 00:17:55.338 slat (usec): min=2, max=10075, avg=95.26, stdev=584.01 00:17:55.338 clat (usec): min=2104, max=35514, avg=12687.77, stdev=5764.25 00:17:55.338 lat (usec): min=2116, max=35522, avg=12783.03, stdev=5774.02 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 8291], 00:17:55.338 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[11338], 60.00th=[12649], 00:17:55.338 | 70.00th=[13960], 80.00th=[15795], 90.00th=[20841], 95.00th=[26084], 00:17:55.338 | 99.00th=[30540], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:17:55.338 | 99.99th=[35390] 00:17:55.338 bw ( KiB/s): min=17264, max=20472, per=25.57%, avg=18868.00, stdev=2268.40, samples=2 00:17:55.338 iops : min= 4316, max= 5118, avg=4717.00, stdev=567.10, samples=2 00:17:55.338 lat (msec) : 4=0.20%, 10=26.39%, 20=60.50%, 50=12.92% 00:17:55.338 cpu : usr=3.94%, sys=4.43%, ctx=493, majf=0, minf=1 00:17:55.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:55.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.338 issued rwts: total=4608,4844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.338 job3: (groupid=0, jobs=1): err= 0: pid=2268606: Tue Feb 13 08:18:29 2024 00:17:55.338 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:17:55.338 slat (nsec): min=1515, max=19118k, avg=98534.64, stdev=761361.61 00:17:55.338 clat (usec): min=7085, max=38120, avg=13380.63, stdev=4691.48 00:17:55.338 lat (usec): min=7091, max=38147, avg=13479.16, stdev=4731.56 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[10159], 
00:17:55.338 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11863], 60.00th=[13042], 00:17:55.338 | 70.00th=[14091], 80.00th=[16319], 90.00th=[18744], 95.00th=[22152], 00:17:55.338 | 99.00th=[32637], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:17:55.338 | 99.99th=[38011] 00:17:55.338 write: IOPS=5131, BW=20.0MiB/s (21.0MB/s)(20.2MiB/1008msec); 0 zone resets 00:17:55.338 slat (usec): min=2, max=12452, avg=90.81, stdev=610.53 00:17:55.338 clat (usec): min=1242, max=35097, avg=11440.25, stdev=3916.62 00:17:55.338 lat (usec): min=1655, max=35102, avg=11531.07, stdev=3930.91 00:17:55.338 clat percentiles (usec): 00:17:55.338 | 1.00th=[ 5276], 5.00th=[ 5800], 10.00th=[ 6849], 20.00th=[ 8094], 00:17:55.338 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[12125], 00:17:55.338 | 70.00th=[13304], 80.00th=[15008], 90.00th=[16319], 95.00th=[17957], 00:17:55.338 | 99.00th=[23200], 99.50th=[27132], 99.90th=[27132], 99.95th=[27919], 00:17:55.338 | 99.99th=[34866] 00:17:55.338 bw ( KiB/s): min=20480, max=20480, per=27.75%, avg=20480.00, stdev= 0.00, samples=2 00:17:55.338 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:17:55.338 lat (msec) : 2=0.04%, 4=0.21%, 10=28.11%, 20=66.58%, 50=5.06% 00:17:55.338 cpu : usr=3.77%, sys=6.26%, ctx=414, majf=0, minf=1 00:17:55.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:55.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.338 issued rwts: total=5120,5173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.338 00:17:55.338 Run status group 0 (all jobs): 00:17:55.338 READ: bw=68.8MiB/s (72.2MB/s), 14.0MiB/s-19.8MiB/s (14.7MB/s-20.8MB/s), io=70.0MiB (73.4MB), run=1001-1017msec 00:17:55.338 WRITE: bw=72.1MiB/s (75.6MB/s), 15.0MiB/s-20.0MiB/s (15.7MB/s-21.0MB/s), io=73.3MiB (76.9MB), 
run=1001-1017msec 00:17:55.338 00:17:55.338 Disk stats (read/write): 00:17:55.338 nvme0n1: ios=3930/4096, merge=0/0, ticks=46852/40287, in_queue=87139, util=99.70% 00:17:55.338 nvme0n2: ios=3072/3110, merge=0/0, ticks=18966/20913, in_queue=39879, util=94.78% 00:17:55.338 nvme0n3: ios=3759/4096, merge=0/0, ticks=55517/48911, in_queue=104428, util=95.68% 00:17:55.338 nvme0n4: ios=4116/4323, merge=0/0, ticks=56322/47878, in_queue=104200, util=97.13% 00:17:55.338 08:18:29 -- target/fio.sh@55 -- # sync 00:17:55.338 08:18:29 -- target/fio.sh@59 -- # fio_pid=2268838 00:17:55.338 08:18:29 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:55.338 08:18:29 -- target/fio.sh@61 -- # sleep 3 00:17:55.627 [global] 00:17:55.627 thread=1 00:17:55.627 invalidate=1 00:17:55.627 rw=read 00:17:55.627 time_based=1 00:17:55.627 runtime=10 00:17:55.627 ioengine=libaio 00:17:55.627 direct=1 00:17:55.627 bs=4096 00:17:55.627 iodepth=1 00:17:55.627 norandommap=1 00:17:55.628 numjobs=1 00:17:55.628 00:17:55.628 [job0] 00:17:55.628 filename=/dev/nvme0n1 00:17:55.628 [job1] 00:17:55.628 filename=/dev/nvme0n2 00:17:55.628 [job2] 00:17:55.628 filename=/dev/nvme0n3 00:17:55.628 [job3] 00:17:55.628 filename=/dev/nvme0n4 00:17:55.628 Could not set queue depth (nvme0n1) 00:17:55.628 Could not set queue depth (nvme0n2) 00:17:55.628 Could not set queue depth (nvme0n3) 00:17:55.628 Could not set queue depth (nvme0n4) 00:17:55.889 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.889 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.889 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.889 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.889 fio-3.35 00:17:55.889 Starting 4 
threads 00:17:58.415 08:18:32 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:58.672 08:18:32 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:58.672 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21774336, buflen=4096 00:17:58.673 fio: pid=2268987, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:58.930 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=22691840, buflen=4096 00:17:58.930 fio: pid=2268986, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:58.930 08:18:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:58.930 08:18:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:58.930 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1855488, buflen=4096 00:17:58.930 fio: pid=2268984, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:58.930 08:18:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:58.930 08:18:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:59.188 08:18:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:59.188 08:18:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:59.188 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11341824, buflen=4096 00:17:59.188 fio: pid=2268985, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:59.188 00:17:59.188 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2268984: Tue Feb 
13 08:18:32 2024 00:17:59.188 read: IOPS=148, BW=592KiB/s (606kB/s)(1812KiB/3062msec) 00:17:59.188 slat (usec): min=2, max=555, avg=11.22, stdev=26.13 00:17:59.188 clat (usec): min=513, max=43991, avg=6677.62, stdev=14475.84 00:17:59.188 lat (usec): min=521, max=44001, avg=6688.84, stdev=14483.22 00:17:59.188 clat percentiles (usec): 00:17:59.188 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 562], 00:17:59.188 | 30.00th=[ 570], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 611], 00:17:59.188 | 70.00th=[ 627], 80.00th=[ 709], 90.00th=[41157], 95.00th=[41157], 00:17:59.188 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:17:59.188 | 99.99th=[43779] 00:17:59.188 bw ( KiB/s): min= 96, max= 2704, per=3.79%, avg=651.20, stdev=1149.85, samples=5 00:17:59.188 iops : min= 24, max= 676, avg=162.80, stdev=287.46, samples=5 00:17:59.188 lat (usec) : 750=82.38%, 1000=1.76% 00:17:59.188 lat (msec) : 2=0.66%, 50=14.98% 00:17:59.188 cpu : usr=0.13%, sys=0.26%, ctx=455, majf=0, minf=1 00:17:59.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.188 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.188 issued rwts: total=454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.188 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2268985: Tue Feb 13 08:18:32 2024 00:17:59.188 read: IOPS=845, BW=3382KiB/s (3463kB/s)(10.8MiB/3275msec) 00:17:59.188 slat (usec): min=4, max=9561, avg=12.33, stdev=194.89 00:17:59.188 clat (usec): min=279, max=43005, avg=1161.40, stdev=5230.46 00:17:59.188 lat (usec): min=286, max=50811, avg=1173.72, stdev=5272.46 00:17:59.188 clat percentiles (usec): 00:17:59.188 | 1.00th=[ 383], 5.00th=[ 437], 10.00th=[ 457], 20.00th=[ 465], 00:17:59.188 | 30.00th=[ 474], 
40.00th=[ 478], 50.00th=[ 482], 60.00th=[ 486], 00:17:59.188 | 70.00th=[ 490], 80.00th=[ 494], 90.00th=[ 502], 95.00th=[ 537], 00:17:59.188 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:17:59.188 | 99.99th=[43254] 00:17:59.188 bw ( KiB/s): min= 96, max= 8256, per=21.12%, avg=3632.83, stdev=3892.85, samples=6 00:17:59.188 iops : min= 24, max= 2064, avg=908.17, stdev=973.26, samples=6 00:17:59.188 lat (usec) : 500=87.33%, 750=10.36%, 1000=0.54% 00:17:59.188 lat (msec) : 2=0.07%, 50=1.66% 00:17:59.188 cpu : usr=0.31%, sys=0.73%, ctx=2772, majf=0, minf=1 00:17:59.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.188 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.189 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2268986: Tue Feb 13 08:18:32 2024 00:17:59.189 read: IOPS=1929, BW=7719KiB/s (7904kB/s)(21.6MiB/2871msec) 00:17:59.189 slat (usec): min=4, max=10074, avg=11.05, stdev=174.71 00:17:59.189 clat (usec): min=282, max=42052, avg=502.20, stdev=1996.20 00:17:59.189 lat (usec): min=290, max=42075, avg=513.25, stdev=2004.92 00:17:59.189 clat percentiles (usec): 00:17:59.189 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:17:59.189 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 396], 00:17:59.189 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 498], 95.00th=[ 570], 00:17:59.189 | 99.00th=[ 660], 99.50th=[ 758], 99.90th=[41681], 99.95th=[42206], 00:17:59.189 | 99.99th=[42206] 00:17:59.189 bw ( KiB/s): min= 808, max=11552, per=45.76%, avg=7868.80, stdev=4137.19, samples=5 00:17:59.189 iops : min= 202, max= 2888, avg=1967.20, stdev=1034.30, samples=5 00:17:59.189 lat 
(usec) : 500=90.85%, 750=8.61%, 1000=0.22% 00:17:59.189 lat (msec) : 2=0.04%, 4=0.02%, 50=0.25% 00:17:59.189 cpu : usr=0.80%, sys=1.53%, ctx=5545, majf=0, minf=1 00:17:59.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.189 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.189 issued rwts: total=5541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.189 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2268987: Tue Feb 13 08:18:32 2024 00:17:59.189 read: IOPS=1968, BW=7873KiB/s (8062kB/s)(20.8MiB/2701msec) 00:17:59.189 slat (nsec): min=6230, max=36435, avg=7568.59, stdev=1630.20 00:17:59.189 clat (usec): min=257, max=42086, avg=492.57, stdev=1943.66 00:17:59.189 lat (usec): min=264, max=42109, avg=500.14, stdev=1944.32 00:17:59.189 clat percentiles (usec): 00:17:59.189 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 326], 20.00th=[ 334], 00:17:59.189 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 375], 00:17:59.189 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 498], 95.00th=[ 510], 00:17:59.189 | 99.00th=[ 676], 99.50th=[ 766], 99.90th=[41681], 99.95th=[42206], 00:17:59.189 | 99.99th=[42206] 00:17:59.189 bw ( KiB/s): min= 512, max=11552, per=45.92%, avg=7896.00, stdev=4307.28, samples=5 00:17:59.189 iops : min= 128, max= 2888, avg=1974.00, stdev=1076.82, samples=5 00:17:59.189 lat (usec) : 500=91.57%, 750=7.88%, 1000=0.28% 00:17:59.189 lat (msec) : 50=0.24% 00:17:59.189 cpu : usr=0.52%, sys=1.85%, ctx=5317, majf=0, minf=2 00:17:59.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.189 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.189 
issued rwts: total=5317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.189 00:17:59.189 Run status group 0 (all jobs): 00:17:59.189 READ: bw=16.8MiB/s (17.6MB/s), 592KiB/s-7873KiB/s (606kB/s-8062kB/s), io=55.0MiB (57.7MB), run=2701-3275msec 00:17:59.189 00:17:59.189 Disk stats (read/write): 00:17:59.189 nvme0n1: ios=416/0, merge=0/0, ticks=2830/0, in_queue=2830, util=95.16% 00:17:59.189 nvme0n2: ios=2729/0, merge=0/0, ticks=3052/0, in_queue=3052, util=95.95% 00:17:59.189 nvme0n3: ios=5581/0, merge=0/0, ticks=2920/0, in_queue=2920, util=98.48% 00:17:59.189 nvme0n4: ios=5124/0, merge=0/0, ticks=2528/0, in_queue=2528, util=96.44% 00:17:59.447 08:18:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:59.447 08:18:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:59.704 08:18:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:59.704 08:18:33 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:59.704 08:18:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:59.704 08:18:33 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:59.961 08:18:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:59.961 08:18:33 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:00.219 08:18:33 -- target/fio.sh@69 -- # fio_status=0 00:18:00.219 08:18:33 -- target/fio.sh@70 -- # wait 2268838 00:18:00.219 08:18:33 -- target/fio.sh@70 -- # fio_status=4 00:18:00.219 08:18:33 -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:18:00.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.219 08:18:33 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:00.219 08:18:33 -- common/autotest_common.sh@1196 -- # local i=0 00:18:00.219 08:18:33 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:18:00.219 08:18:33 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.219 08:18:33 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:00.219 08:18:33 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.219 08:18:33 -- common/autotest_common.sh@1208 -- # return 0 00:18:00.219 08:18:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:00.219 08:18:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:00.219 nvmf hotplug test: fio failed as expected 00:18:00.219 08:18:33 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.477 08:18:34 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:00.477 08:18:34 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:00.477 08:18:34 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:00.477 08:18:34 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:00.477 08:18:34 -- target/fio.sh@91 -- # nvmftestfini 00:18:00.477 08:18:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:00.477 08:18:34 -- nvmf/common.sh@116 -- # sync 00:18:00.477 08:18:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:00.477 08:18:34 -- nvmf/common.sh@119 -- # set +e 00:18:00.477 08:18:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:00.477 08:18:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:00.477 rmmod nvme_tcp 00:18:00.477 rmmod nvme_fabrics 00:18:00.477 rmmod nvme_keyring 00:18:00.477 08:18:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:00.477 
08:18:34 -- nvmf/common.sh@123 -- # set -e 00:18:00.477 08:18:34 -- nvmf/common.sh@124 -- # return 0 00:18:00.477 08:18:34 -- nvmf/common.sh@477 -- # '[' -n 2265961 ']' 00:18:00.477 08:18:34 -- nvmf/common.sh@478 -- # killprocess 2265961 00:18:00.477 08:18:34 -- common/autotest_common.sh@924 -- # '[' -z 2265961 ']' 00:18:00.477 08:18:34 -- common/autotest_common.sh@928 -- # kill -0 2265961 00:18:00.477 08:18:34 -- common/autotest_common.sh@929 -- # uname 00:18:00.477 08:18:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:00.477 08:18:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2265961 00:18:00.477 08:18:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:00.477 08:18:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:00.477 08:18:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2265961' 00:18:00.477 killing process with pid 2265961 00:18:00.477 08:18:34 -- common/autotest_common.sh@943 -- # kill 2265961 00:18:00.477 08:18:34 -- common/autotest_common.sh@948 -- # wait 2265961 00:18:00.735 08:18:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:00.735 08:18:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:00.735 08:18:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:00.735 08:18:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.735 08:18:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:00.735 08:18:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.735 08:18:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.735 08:18:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.268 08:18:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:03.268 00:18:03.268 real 0m26.755s 00:18:03.268 user 1m46.042s 00:18:03.268 sys 0m8.293s 00:18:03.268 08:18:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:03.268 08:18:36 -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.268 ************************************ 00:18:03.268 END TEST nvmf_fio_target 00:18:03.268 ************************************ 00:18:03.268 08:18:36 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:03.268 08:18:36 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:18:03.268 08:18:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:03.268 08:18:36 -- common/autotest_common.sh@10 -- # set +x 00:18:03.268 ************************************ 00:18:03.268 START TEST nvmf_bdevio 00:18:03.268 ************************************ 00:18:03.268 08:18:36 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:03.268 * Looking for test storage... 00:18:03.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.268 08:18:36 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.268 08:18:36 -- nvmf/common.sh@7 -- # uname -s 00:18:03.268 08:18:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.268 08:18:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.268 08:18:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.268 08:18:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.268 08:18:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.268 08:18:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.268 08:18:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.268 08:18:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.268 08:18:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.268 08:18:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.268 08:18:36 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:03.268 08:18:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:03.268 08:18:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.268 08:18:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.268 08:18:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.268 08:18:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.268 08:18:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.268 08:18:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.268 08:18:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.268 08:18:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.268 08:18:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.268 08:18:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.268 08:18:36 -- paths/export.sh@5 -- # export PATH 00:18:03.268 08:18:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.268 08:18:36 -- nvmf/common.sh@46 -- # : 0 00:18:03.268 08:18:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:03.268 08:18:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:03.268 08:18:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:03.268 08:18:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.268 08:18:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.268 08:18:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:03.268 08:18:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:03.268 08:18:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:03.268 08:18:36 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.268 08:18:36 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.268 08:18:36 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:18:03.268 08:18:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:03.268 08:18:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.268 08:18:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:03.268 08:18:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:03.268 08:18:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:03.268 08:18:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.268 08:18:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.268 08:18:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.268 08:18:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:03.268 08:18:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:03.269 08:18:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:03.269 08:18:36 -- common/autotest_common.sh@10 -- # set +x 00:18:09.831 08:18:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:09.831 08:18:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:09.831 08:18:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:09.831 08:18:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:09.831 08:18:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:09.832 08:18:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:09.832 08:18:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:09.832 08:18:42 -- nvmf/common.sh@294 -- # net_devs=() 00:18:09.832 08:18:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:09.832 08:18:42 -- nvmf/common.sh@295 -- # e810=() 00:18:09.832 08:18:42 -- nvmf/common.sh@295 -- # local -ga e810 00:18:09.832 08:18:42 -- nvmf/common.sh@296 -- # x722=() 00:18:09.832 08:18:42 -- nvmf/common.sh@296 -- # local -ga x722 00:18:09.832 08:18:42 -- nvmf/common.sh@297 -- # mlx=() 00:18:09.832 08:18:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:09.832 08:18:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.832 08:18:42 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.832 08:18:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:09.832 08:18:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:09.832 08:18:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:09.832 08:18:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:09.832 08:18:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:09.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:09.832 08:18:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:09.832 08:18:42 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:09.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:09.832 08:18:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:09.832 08:18:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:09.832 08:18:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.832 08:18:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:09.832 08:18:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.832 08:18:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:09.832 Found net devices under 0000:af:00.0: cvl_0_0 00:18:09.832 08:18:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.832 08:18:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:09.832 08:18:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.832 08:18:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:09.832 08:18:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.832 08:18:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:09.832 Found net devices under 0000:af:00.1: cvl_0_1 00:18:09.832 08:18:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.832 08:18:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:09.832 08:18:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:09.832 08:18:42 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:18:09.832 08:18:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:09.832 08:18:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.832 08:18:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.832 08:18:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.832 08:18:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:09.832 08:18:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.832 08:18:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.832 08:18:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:09.832 08:18:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.832 08:18:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.832 08:18:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:09.832 08:18:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:09.832 08:18:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.832 08:18:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.832 08:18:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.832 08:18:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.832 08:18:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:09.832 08:18:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.832 08:18:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.832 08:18:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.832 08:18:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:09.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:09.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:18:09.832 00:18:09.832 --- 10.0.0.2 ping statistics --- 00:18:09.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.832 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:09.832 08:18:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:18:09.832 00:18:09.832 --- 10.0.0.1 ping statistics --- 00:18:09.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.832 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:09.832 08:18:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.832 08:18:42 -- nvmf/common.sh@410 -- # return 0 00:18:09.832 08:18:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:09.832 08:18:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.832 08:18:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:09.832 08:18:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.832 08:18:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:09.832 08:18:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:09.832 08:18:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:09.832 08:18:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:09.832 08:18:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:09.832 08:18:42 -- common/autotest_common.sh@10 -- # set +x 00:18:09.832 08:18:42 -- nvmf/common.sh@469 -- # nvmfpid=2273673 00:18:09.832 08:18:42 -- nvmf/common.sh@470 -- # waitforlisten 2273673 00:18:09.832 08:18:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:09.832 08:18:42 -- common/autotest_common.sh@817 
-- # '[' -z 2273673 ']' 00:18:09.832 08:18:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.832 08:18:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.832 08:18:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.832 08:18:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.832 08:18:42 -- common/autotest_common.sh@10 -- # set +x 00:18:09.832 [2024-02-13 08:18:42.808797] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:09.832 [2024-02-13 08:18:42.808840] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.832 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.832 [2024-02-13 08:18:42.874589] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.832 [2024-02-13 08:18:42.948046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:09.832 [2024-02-13 08:18:42.948157] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.832 [2024-02-13 08:18:42.948165] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.832 [2024-02-13 08:18:42.948175] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:09.832 [2024-02-13 08:18:42.948291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:09.832 [2024-02-13 08:18:42.948398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:09.832 [2024-02-13 08:18:42.948503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.832 [2024-02-13 08:18:42.948505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:10.089 08:18:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:10.089 08:18:43 -- common/autotest_common.sh@850 -- # return 0 00:18:10.089 08:18:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:10.089 08:18:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:10.089 08:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 08:18:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.089 08:18:43 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:10.089 08:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.089 08:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 [2024-02-13 08:18:43.645872] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.089 08:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.089 08:18:43 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:10.089 08:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.089 08:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 Malloc0 00:18:10.089 08:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.089 08:18:43 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:10.089 08:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.089 08:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 08:18:43 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:18:10.089 08:18:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:10.089 08:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.089 08:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 08:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.089 08:18:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.089 08:18:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.089 08:18:43 -- common/autotest_common.sh@10 -- # set +x 00:18:10.089 [2024-02-13 08:18:43.697217] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.089 08:18:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.089 08:18:43 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:10.089 08:18:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:10.089 08:18:43 -- nvmf/common.sh@520 -- # config=() 00:18:10.089 08:18:43 -- nvmf/common.sh@520 -- # local subsystem config 00:18:10.089 08:18:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:10.089 08:18:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:10.089 { 00:18:10.089 "params": { 00:18:10.089 "name": "Nvme$subsystem", 00:18:10.089 "trtype": "$TEST_TRANSPORT", 00:18:10.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.089 "adrfam": "ipv4", 00:18:10.089 "trsvcid": "$NVMF_PORT", 00:18:10.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.089 "hdgst": ${hdgst:-false}, 00:18:10.089 "ddgst": ${ddgst:-false} 00:18:10.089 }, 00:18:10.089 "method": "bdev_nvme_attach_controller" 00:18:10.089 } 00:18:10.089 EOF 00:18:10.089 )") 00:18:10.089 08:18:43 -- nvmf/common.sh@542 -- # cat 00:18:10.089 08:18:43 -- nvmf/common.sh@544 -- # jq . 
00:18:10.089 08:18:43 -- nvmf/common.sh@545 -- # IFS=, 00:18:10.089 08:18:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:10.089 "params": { 00:18:10.089 "name": "Nvme1", 00:18:10.089 "trtype": "tcp", 00:18:10.089 "traddr": "10.0.0.2", 00:18:10.089 "adrfam": "ipv4", 00:18:10.089 "trsvcid": "4420", 00:18:10.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.089 "hdgst": false, 00:18:10.089 "ddgst": false 00:18:10.089 }, 00:18:10.089 "method": "bdev_nvme_attach_controller" 00:18:10.089 }' 00:18:10.089 [2024-02-13 08:18:43.743018] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:10.090 [2024-02-13 08:18:43.743058] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273744 ] 00:18:10.090 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.346 [2024-02-13 08:18:43.804078] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:10.346 [2024-02-13 08:18:43.874859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.346 [2024-02-13 08:18:43.874954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.346 [2024-02-13 08:18:43.874956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.346 [2024-02-13 08:18:43.875035] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:10.602 [2024-02-13 08:18:44.146055] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:10.602 [2024-02-13 08:18:44.146087] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:10.602 I/O targets: 00:18:10.602 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:10.602 00:18:10.602 00:18:10.602 CUnit - A unit testing framework for C - Version 2.1-3 00:18:10.602 http://cunit.sourceforge.net/ 00:18:10.602 00:18:10.602 00:18:10.602 Suite: bdevio tests on: Nvme1n1 00:18:10.602 Test: blockdev write read block ...passed 00:18:10.602 Test: blockdev write zeroes read block ...passed 00:18:10.602 Test: blockdev write zeroes read no split ...passed 00:18:10.859 Test: blockdev write zeroes read split ...passed 00:18:10.859 Test: blockdev write zeroes read split partial ...passed 00:18:10.859 Test: blockdev reset ...[2024-02-13 08:18:44.369521] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:10.859 [2024-02-13 08:18:44.369571] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1828c70 (9): Bad file descriptor 00:18:10.859 [2024-02-13 08:18:44.465921] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:10.859 passed 00:18:10.859 Test: blockdev write read 8 blocks ...passed 00:18:10.859 Test: blockdev write read size > 128k ...passed 00:18:10.859 Test: blockdev write read invalid size ...passed 00:18:10.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:10.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:10.859 Test: blockdev write read max offset ...passed 00:18:11.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:11.115 Test: blockdev writev readv 8 blocks ...passed 00:18:11.115 Test: blockdev writev readv 30 x 1block ...passed 00:18:11.115 Test: blockdev writev readv block ...passed 00:18:11.115 Test: blockdev writev readv size > 128k ...passed 00:18:11.115 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:11.115 Test: blockdev comparev and writev ...[2024-02-13 08:18:44.645561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.115 [2024-02-13 08:18:44.645586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.115 [2024-02-13 08:18:44.645599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.115 [2024-02-13 08:18:44.645607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.115 [2024-02-13 08:18:44.645967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.115 [2024-02-13 08:18:44.645978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:11.115 [2024-02-13 08:18:44.645990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.115 [2024-02-13 08:18:44.645998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:11.115 [2024-02-13 08:18:44.646351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.115 [2024-02-13 08:18:44.646367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:11.115 [2024-02-13 08:18:44.646379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.116 [2024-02-13 08:18:44.646387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:11.116 [2024-02-13 08:18:44.646759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.116 [2024-02-13 08:18:44.646770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:11.116 [2024-02-13 08:18:44.646782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:11.116 [2024-02-13 08:18:44.646790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:11.116 passed 00:18:11.116 Test: blockdev nvme passthru rw ...passed 00:18:11.116 Test: blockdev nvme passthru vendor specific ...[2024-02-13 08:18:44.729279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:11.116 [2024-02-13 08:18:44.729295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:11.116 [2024-02-13 08:18:44.729535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:11.116 [2024-02-13 08:18:44.729546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:11.116 [2024-02-13 08:18:44.729786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:11.116 [2024-02-13 08:18:44.729796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:11.116 [2024-02-13 08:18:44.730032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:11.116 [2024-02-13 08:18:44.730042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:11.116 passed 00:18:11.116 Test: blockdev nvme admin passthru ...passed 00:18:11.116 Test: blockdev copy ...passed 00:18:11.116 00:18:11.116 Run Summary: Type Total Ran Passed Failed Inactive 00:18:11.116 suites 1 1 n/a 0 0 00:18:11.116 tests 23 23 23 0 0 00:18:11.116 asserts 152 152 152 0 n/a 00:18:11.116 00:18:11.116 Elapsed time = 1.270 seconds 00:18:11.116 [2024-02-13 08:18:44.782097] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:11.373 08:18:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.373 08:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.373 08:18:44 -- common/autotest_common.sh@10 -- # set +x 00:18:11.373 08:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.373 08:18:44 -- 
target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:11.373 08:18:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:11.373 08:18:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:11.373 08:18:44 -- nvmf/common.sh@116 -- # sync 00:18:11.373 08:18:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:11.373 08:18:44 -- nvmf/common.sh@119 -- # set +e 00:18:11.373 08:18:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:11.373 08:18:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:11.373 rmmod nvme_tcp 00:18:11.373 rmmod nvme_fabrics 00:18:11.373 rmmod nvme_keyring 00:18:11.373 08:18:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:11.373 08:18:45 -- nvmf/common.sh@123 -- # set -e 00:18:11.373 08:18:45 -- nvmf/common.sh@124 -- # return 0 00:18:11.373 08:18:45 -- nvmf/common.sh@477 -- # '[' -n 2273673 ']' 00:18:11.373 08:18:45 -- nvmf/common.sh@478 -- # killprocess 2273673 00:18:11.373 08:18:45 -- common/autotest_common.sh@924 -- # '[' -z 2273673 ']' 00:18:11.373 08:18:45 -- common/autotest_common.sh@928 -- # kill -0 2273673 00:18:11.373 08:18:45 -- common/autotest_common.sh@929 -- # uname 00:18:11.373 08:18:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:11.373 08:18:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2273673 00:18:11.630 08:18:45 -- common/autotest_common.sh@930 -- # process_name=reactor_3 00:18:11.630 08:18:45 -- common/autotest_common.sh@934 -- # '[' reactor_3 = sudo ']' 00:18:11.630 08:18:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2273673' 00:18:11.630 killing process with pid 2273673 00:18:11.630 08:18:45 -- common/autotest_common.sh@943 -- # kill 2273673 00:18:11.630 08:18:45 -- common/autotest_common.sh@948 -- # wait 2273673 00:18:11.630 08:18:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.630 08:18:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.630 08:18:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.630 08:18:45 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.630 08:18:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.630 08:18:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.630 08:18:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.630 08:18:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.158 08:18:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:14.158 00:18:14.158 real 0m10.912s 00:18:14.158 user 0m13.468s 00:18:14.158 sys 0m5.181s 00:18:14.158 08:18:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:14.158 08:18:47 -- common/autotest_common.sh@10 -- # set +x 00:18:14.158 ************************************ 00:18:14.158 END TEST nvmf_bdevio 00:18:14.158 ************************************ 00:18:14.158 08:18:47 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:18:14.158 08:18:47 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:14.158 08:18:47 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:18:14.158 08:18:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:14.158 08:18:47 -- common/autotest_common.sh@10 -- # set +x 00:18:14.158 ************************************ 00:18:14.158 START TEST nvmf_bdevio_no_huge 00:18:14.158 ************************************ 00:18:14.158 08:18:47 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:14.158 * Looking for test storage... 
00:18:14.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.158 08:18:47 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.158 08:18:47 -- nvmf/common.sh@7 -- # uname -s 00:18:14.158 08:18:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.158 08:18:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.158 08:18:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.158 08:18:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.158 08:18:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.158 08:18:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.158 08:18:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.158 08:18:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.158 08:18:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.158 08:18:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.158 08:18:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:14.158 08:18:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:14.158 08:18:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.158 08:18:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.158 08:18:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.158 08:18:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.158 08:18:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.158 08:18:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.158 08:18:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.158 08:18:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.158 08:18:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.158 08:18:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.158 08:18:47 -- paths/export.sh@5 -- # export PATH 00:18:14.158 08:18:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.158 08:18:47 -- nvmf/common.sh@46 -- # : 0 00:18:14.158 08:18:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:14.158 08:18:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:14.158 08:18:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:14.158 08:18:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.158 08:18:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.158 08:18:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:14.158 08:18:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:14.158 08:18:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:14.158 08:18:47 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.158 08:18:47 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.158 08:18:47 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:14.158 08:18:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:14.158 08:18:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.158 08:18:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:14.158 08:18:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:14.158 08:18:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:14.158 08:18:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.158 08:18:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.158 08:18:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.158 08:18:47 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:14.158 08:18:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:14.158 08:18:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:14.158 08:18:47 -- common/autotest_common.sh@10 -- # set +x 00:18:20.717 08:18:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:20.718 08:18:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:20.718 08:18:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:20.718 08:18:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:20.718 08:18:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:20.718 08:18:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:20.718 08:18:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:20.718 08:18:53 -- nvmf/common.sh@294 -- # net_devs=() 00:18:20.718 08:18:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:20.718 08:18:53 -- nvmf/common.sh@295 -- # e810=() 00:18:20.718 08:18:53 -- nvmf/common.sh@295 -- # local -ga e810 00:18:20.718 08:18:53 -- nvmf/common.sh@296 -- # x722=() 00:18:20.718 08:18:53 -- nvmf/common.sh@296 -- # local -ga x722 00:18:20.718 08:18:53 -- nvmf/common.sh@297 -- # mlx=() 00:18:20.718 08:18:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:20.718 08:18:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.718 08:18:53 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.718 08:18:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:20.718 08:18:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:20.718 08:18:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:20.718 08:18:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:20.718 08:18:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:20.718 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:20.718 08:18:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:20.718 08:18:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:20.718 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:20.718 08:18:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:20.718 08:18:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:20.718 08:18:53 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:20.718 08:18:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.718 08:18:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:20.718 08:18:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.718 08:18:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:20.718 Found net devices under 0000:af:00.0: cvl_0_0 00:18:20.718 08:18:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.718 08:18:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:20.718 08:18:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.718 08:18:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:20.718 08:18:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.718 08:18:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:20.718 Found net devices under 0000:af:00.1: cvl_0_1 00:18:20.718 08:18:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.718 08:18:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:20.718 08:18:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:20.718 08:18:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:20.718 08:18:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.718 08:18:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.718 08:18:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.718 08:18:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:20.718 08:18:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.718 08:18:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.718 08:18:53 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:20.718 08:18:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.718 08:18:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.718 08:18:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:20.718 08:18:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:20.718 08:18:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.718 08:18:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.718 08:18:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.718 08:18:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.718 08:18:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:20.718 08:18:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.718 08:18:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.718 08:18:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.718 08:18:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:20.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:18:20.718 00:18:20.718 --- 10.0.0.2 ping statistics --- 00:18:20.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.718 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:20.718 08:18:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:20.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:18:20.718 00:18:20.718 --- 10.0.0.1 ping statistics --- 00:18:20.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.718 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:18:20.718 08:18:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.718 08:18:53 -- nvmf/common.sh@410 -- # return 0 00:18:20.718 08:18:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:20.718 08:18:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.718 08:18:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:20.718 08:18:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.718 08:18:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:20.718 08:18:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:20.718 08:18:53 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:20.718 08:18:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:20.718 08:18:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.718 08:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 08:18:53 -- nvmf/common.sh@469 -- # nvmfpid=2277796 00:18:20.718 08:18:53 -- nvmf/common.sh@470 -- # waitforlisten 2277796 00:18:20.718 08:18:53 -- common/autotest_common.sh@817 -- # '[' -z 2277796 ']' 00:18:20.718 08:18:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.718 08:18:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.718 08:18:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:20.718 08:18:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.718 08:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 08:18:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:20.718 [2024-02-13 08:18:53.637750] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:20.718 [2024-02-13 08:18:53.637794] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:20.718 [2024-02-13 08:18:53.705743] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.718 [2024-02-13 08:18:53.786587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:20.718 [2024-02-13 08:18:53.786692] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.718 [2024-02-13 08:18:53.786701] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.718 [2024-02-13 08:18:53.786709] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:20.718 [2024-02-13 08:18:53.786816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.718 [2024-02-13 08:18:53.786922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:20.718 [2024-02-13 08:18:53.787030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.719 [2024-02-13 08:18:53.787031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:20.982 08:18:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.982 08:18:54 -- common/autotest_common.sh@850 -- # return 0 00:18:20.982 08:18:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.982 08:18:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:20.982 08:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.982 08:18:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.982 08:18:54 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.982 08:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.982 08:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.982 [2024-02-13 08:18:54.457553] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.982 08:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.982 08:18:54 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:20.982 08:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.982 08:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.982 Malloc0 00:18:20.982 08:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.982 08:18:54 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:20.982 08:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.982 08:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.982 08:18:54 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:18:20.982 08:18:54 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:20.982 08:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.982 08:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.982 08:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.982 08:18:54 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.982 08:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.982 08:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.982 [2024-02-13 08:18:54.493811] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.982 08:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.982 08:18:54 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:20.982 08:18:54 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:20.982 08:18:54 -- nvmf/common.sh@520 -- # config=() 00:18:20.982 08:18:54 -- nvmf/common.sh@520 -- # local subsystem config 00:18:20.982 08:18:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:20.982 08:18:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:20.982 { 00:18:20.982 "params": { 00:18:20.982 "name": "Nvme$subsystem", 00:18:20.982 "trtype": "$TEST_TRANSPORT", 00:18:20.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.982 "adrfam": "ipv4", 00:18:20.982 "trsvcid": "$NVMF_PORT", 00:18:20.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.982 "hdgst": ${hdgst:-false}, 00:18:20.982 "ddgst": ${ddgst:-false} 00:18:20.982 }, 00:18:20.982 "method": "bdev_nvme_attach_controller" 00:18:20.982 } 00:18:20.982 EOF 00:18:20.982 )") 00:18:20.982 08:18:54 -- nvmf/common.sh@542 -- # cat 00:18:20.982 08:18:54 -- nvmf/common.sh@544 -- # jq 
. 00:18:20.982 08:18:54 -- nvmf/common.sh@545 -- # IFS=, 00:18:20.982 08:18:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:20.982 "params": { 00:18:20.982 "name": "Nvme1", 00:18:20.982 "trtype": "tcp", 00:18:20.982 "traddr": "10.0.0.2", 00:18:20.982 "adrfam": "ipv4", 00:18:20.982 "trsvcid": "4420", 00:18:20.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.982 "hdgst": false, 00:18:20.982 "ddgst": false 00:18:20.982 }, 00:18:20.982 "method": "bdev_nvme_attach_controller" 00:18:20.982 }' 00:18:20.982 [2024-02-13 08:18:54.539852] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:20.982 [2024-02-13 08:18:54.539895] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2278017 ] 00:18:20.982 [2024-02-13 08:18:54.604133] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.244 [2024-02-13 08:18:54.687599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.244 [2024-02-13 08:18:54.687695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.244 [2024-02-13 08:18:54.687698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.245 [2024-02-13 08:18:54.687772] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:18:21.501 [2024-02-13 08:18:54.981696] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:21.501 [2024-02-13 08:18:54.981727] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:21.501 I/O targets: 00:18:21.501 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:21.501 00:18:21.501 00:18:21.501 CUnit - A unit testing framework for C - Version 2.1-3 00:18:21.501 http://cunit.sourceforge.net/ 00:18:21.501 00:18:21.501 00:18:21.501 Suite: bdevio tests on: Nvme1n1 00:18:21.501 Test: blockdev write read block ...passed 00:18:21.501 Test: blockdev write zeroes read block ...passed 00:18:21.501 Test: blockdev write zeroes read no split ...passed 00:18:21.501 Test: blockdev write zeroes read split ...passed 00:18:21.758 Test: blockdev write zeroes read split partial ...passed 00:18:21.758 Test: blockdev reset ...[2024-02-13 08:18:55.210028] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:21.758 [2024-02-13 08:18:55.210081] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfb680 (9): Bad file descriptor 00:18:21.758 [2024-02-13 08:18:55.320889] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:21.758 passed 00:18:21.758 Test: blockdev write read 8 blocks ...passed 00:18:21.758 Test: blockdev write read size > 128k ...passed 00:18:21.758 Test: blockdev write read invalid size ...passed 00:18:21.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:21.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:21.758 Test: blockdev write read max offset ...passed 00:18:22.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:22.015 Test: blockdev writev readv 8 blocks ...passed 00:18:22.015 Test: blockdev writev readv 30 x 1block ...passed 00:18:22.015 Test: blockdev writev readv block ...passed 00:18:22.015 Test: blockdev writev readv size > 128k ...passed 00:18:22.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:22.015 Test: blockdev comparev and writev ...[2024-02-13 08:18:55.622671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.622698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.015 [2024-02-13 08:18:55.622712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.622719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:22.015 [2024-02-13 08:18:55.623087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.623098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:22.015 [2024-02-13 08:18:55.623110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.623117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:22.015 [2024-02-13 08:18:55.623471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:22.015 [2024-02-13 08:18:55.623496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.623503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:22.015 [2024-02-13 08:18:55.623872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.623882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:22.015 [2024-02-13 08:18:55.623893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:22.015 [2024-02-13 08:18:55.623900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:22.015 passed 00:18:22.273 Test: blockdev nvme passthru rw ...passed 00:18:22.273 Test: blockdev nvme passthru vendor specific ...[2024-02-13 08:18:55.706668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.273 [2024-02-13 08:18:55.706718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:22.273 [2024-02-13 08:18:55.706971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.273 [2024-02-13 08:18:55.706983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:22.273 [2024-02-13 08:18:55.707221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.273 [2024-02-13 08:18:55.707232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:22.273 [2024-02-13 08:18:55.707471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:22.273 [2024-02-13 08:18:55.707482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:22.273 passed 00:18:22.273 Test: blockdev nvme admin passthru ...passed 00:18:22.273 Test: blockdev copy ...passed 00:18:22.273 00:18:22.273 Run Summary: Type Total Ran Passed Failed Inactive 00:18:22.273 suites 1 1 n/a 0 0 00:18:22.273 tests 23 23 23 0 0 00:18:22.273 asserts 152 152 152 0 n/a 00:18:22.273 00:18:22.273 Elapsed time = 1.561 seconds 00:18:22.273 [2024-02-13 08:18:55.763529] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:18:22.529 08:18:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.529 08:18:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.529 08:18:56 -- common/autotest_common.sh@10 -- # set +x 00:18:22.529 08:18:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.529 08:18:56 -- 
target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:22.529 08:18:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:22.529 08:18:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:22.529 08:18:56 -- nvmf/common.sh@116 -- # sync 00:18:22.529 08:18:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:22.529 08:18:56 -- nvmf/common.sh@119 -- # set +e 00:18:22.529 08:18:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:22.529 08:18:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:22.529 rmmod nvme_tcp 00:18:22.529 rmmod nvme_fabrics 00:18:22.529 rmmod nvme_keyring 00:18:22.529 08:18:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.529 08:18:56 -- nvmf/common.sh@123 -- # set -e 00:18:22.529 08:18:56 -- nvmf/common.sh@124 -- # return 0 00:18:22.529 08:18:56 -- nvmf/common.sh@477 -- # '[' -n 2277796 ']' 00:18:22.529 08:18:56 -- nvmf/common.sh@478 -- # killprocess 2277796 00:18:22.529 08:18:56 -- common/autotest_common.sh@924 -- # '[' -z 2277796 ']' 00:18:22.529 08:18:56 -- common/autotest_common.sh@928 -- # kill -0 2277796 00:18:22.529 08:18:56 -- common/autotest_common.sh@929 -- # uname 00:18:22.529 08:18:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:22.529 08:18:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2277796 00:18:22.529 08:18:56 -- common/autotest_common.sh@930 -- # process_name=reactor_3 00:18:22.529 08:18:56 -- common/autotest_common.sh@934 -- # '[' reactor_3 = sudo ']' 00:18:22.529 08:18:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2277796' 00:18:22.529 killing process with pid 2277796 00:18:22.529 08:18:56 -- common/autotest_common.sh@943 -- # kill 2277796 00:18:22.529 08:18:56 -- common/autotest_common.sh@948 -- # wait 2277796 00:18:23.094 08:18:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:23.094 08:18:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:23.094 08:18:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:23.094 08:18:56 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.094 08:18:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:23.094 08:18:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.094 08:18:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.094 08:18:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.999 08:18:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:24.999 00:18:24.999 real 0m11.146s 00:18:24.999 user 0m15.009s 00:18:24.999 sys 0m5.464s 00:18:24.999 08:18:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:24.999 08:18:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.999 ************************************ 00:18:24.999 END TEST nvmf_bdevio_no_huge 00:18:24.999 ************************************ 00:18:24.999 08:18:58 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:24.999 08:18:58 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:18:24.999 08:18:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:24.999 08:18:58 -- common/autotest_common.sh@10 -- # set +x 00:18:24.999 ************************************ 00:18:24.999 START TEST nvmf_tls 00:18:24.999 ************************************ 00:18:24.999 08:18:58 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:24.999 * Looking for test storage... 
00:18:25.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.257 08:18:58 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.257 08:18:58 -- nvmf/common.sh@7 -- # uname -s 00:18:25.257 08:18:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.257 08:18:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.257 08:18:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.257 08:18:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.257 08:18:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.257 08:18:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.257 08:18:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.257 08:18:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.257 08:18:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.257 08:18:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.257 08:18:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:25.257 08:18:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:25.257 08:18:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.257 08:18:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.257 08:18:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.257 08:18:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.257 08:18:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.257 08:18:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.257 08:18:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.257 08:18:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.257 08:18:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.257 08:18:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.257 08:18:58 -- paths/export.sh@5 -- # export PATH 00:18:25.257 08:18:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.257 08:18:58 -- nvmf/common.sh@46 -- # : 0 00:18:25.257 08:18:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.257 08:18:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.257 08:18:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.257 08:18:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.257 08:18:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.257 08:18:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.257 08:18:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.257 08:18:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.257 08:18:58 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.257 08:18:58 -- target/tls.sh@71 -- # nvmftestinit 00:18:25.258 08:18:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:25.258 08:18:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.258 08:18:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.258 08:18:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.258 08:18:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.258 08:18:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.258 08:18:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.258 08:18:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.258 08:18:58 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:18:25.258 08:18:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.258 08:18:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.258 08:18:58 -- common/autotest_common.sh@10 -- # set +x 00:18:31.816 08:19:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:31.816 08:19:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:31.816 08:19:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:31.816 08:19:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:31.816 08:19:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:31.816 08:19:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:31.816 08:19:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:31.816 08:19:04 -- nvmf/common.sh@294 -- # net_devs=() 00:18:31.816 08:19:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:31.816 08:19:04 -- nvmf/common.sh@295 -- # e810=() 00:18:31.816 08:19:04 -- nvmf/common.sh@295 -- # local -ga e810 00:18:31.816 08:19:04 -- nvmf/common.sh@296 -- # x722=() 00:18:31.816 08:19:04 -- nvmf/common.sh@296 -- # local -ga x722 00:18:31.816 08:19:04 -- nvmf/common.sh@297 -- # mlx=() 00:18:31.816 08:19:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:31.816 08:19:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.816 08:19:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:31.816 08:19:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:31.816 08:19:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:31.816 08:19:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.816 08:19:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:31.816 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:31.816 08:19:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.816 08:19:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:31.816 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:31.816 08:19:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:31.816 08:19:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:18:31.816 08:19:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.816 08:19:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.816 08:19:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.816 08:19:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.816 08:19:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:31.816 Found net devices under 0000:af:00.0: cvl_0_0 00:18:31.816 08:19:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.816 08:19:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.817 08:19:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.817 08:19:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.817 08:19:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.817 08:19:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:31.817 Found net devices under 0000:af:00.1: cvl_0_1 00:18:31.817 08:19:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.817 08:19:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:31.817 08:19:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:31.817 08:19:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:31.817 08:19:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:31.817 08:19:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:31.817 08:19:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.817 08:19:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.817 08:19:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.817 08:19:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:31.817 08:19:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.817 08:19:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.817 08:19:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:18:31.817 08:19:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.817 08:19:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.817 08:19:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:31.817 08:19:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:31.817 08:19:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.817 08:19:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.817 08:19:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.817 08:19:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.817 08:19:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:31.817 08:19:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.817 08:19:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.817 08:19:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.817 08:19:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:31.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:18:31.817 00:18:31.817 --- 10.0.0.2 ping statistics --- 00:18:31.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.817 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:31.817 08:19:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:18:31.817 00:18:31.817 --- 10.0.0.1 ping statistics --- 00:18:31.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.817 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:18:31.817 08:19:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.817 08:19:04 -- nvmf/common.sh@410 -- # return 0 00:18:31.817 08:19:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:31.817 08:19:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.817 08:19:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:31.817 08:19:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:31.817 08:19:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.817 08:19:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:31.817 08:19:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:31.817 08:19:04 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:31.817 08:19:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:31.817 08:19:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:31.817 08:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.817 08:19:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:31.817 08:19:04 -- nvmf/common.sh@469 -- # nvmfpid=2282170 00:18:31.817 08:19:04 -- nvmf/common.sh@470 -- # waitforlisten 2282170 00:18:31.817 08:19:04 -- common/autotest_common.sh@817 -- # '[' -z 2282170 ']' 00:18:31.817 08:19:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.817 08:19:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:31.817 08:19:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:31.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.817 08:19:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:31.817 08:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.817 [2024-02-13 08:19:04.997528] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:31.817 [2024-02-13 08:19:04.997570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.817 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.817 [2024-02-13 08:19:05.060118] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.817 [2024-02-13 08:19:05.136327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:31.817 [2024-02-13 08:19:05.136435] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.817 [2024-02-13 08:19:05.136443] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.817 [2024-02-13 08:19:05.136453] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.817 [2024-02-13 08:19:05.136474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.382 08:19:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:32.382 08:19:05 -- common/autotest_common.sh@850 -- # return 0 00:18:32.382 08:19:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:32.382 08:19:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:32.382 08:19:05 -- common/autotest_common.sh@10 -- # set +x 00:18:32.382 08:19:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.382 08:19:05 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:18:32.382 08:19:05 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:32.382 true 00:18:32.382 08:19:06 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.382 08:19:06 -- target/tls.sh@82 -- # jq -r .tls_version 00:18:32.638 08:19:06 -- target/tls.sh@82 -- # version=0 00:18:32.638 08:19:06 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:18:32.638 08:19:06 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:32.894 08:19:06 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.894 08:19:06 -- target/tls.sh@90 -- # jq -r .tls_version 00:18:32.894 08:19:06 -- target/tls.sh@90 -- # version=13 00:18:32.895 08:19:06 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:18:32.895 08:19:06 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:33.184 08:19:06 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.184 08:19:06 -- target/tls.sh@98 -- # jq -r .tls_version 
00:18:33.184 08:19:06 -- target/tls.sh@98 -- # version=7 00:18:33.184 08:19:06 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:18:33.184 08:19:06 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.184 08:19:06 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:33.442 08:19:06 -- target/tls.sh@105 -- # ktls=false 00:18:33.442 08:19:06 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:18:33.442 08:19:06 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:33.699 08:19:07 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.699 08:19:07 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:33.699 08:19:07 -- target/tls.sh@113 -- # ktls=true 00:18:33.699 08:19:07 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:18:33.699 08:19:07 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:33.957 08:19:07 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.957 08:19:07 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:18:33.957 08:19:07 -- target/tls.sh@121 -- # ktls=false 00:18:33.957 08:19:07 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:18:33.957 08:19:07 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:18:33.957 08:19:07 -- target/tls.sh@49 -- # local key hash crc 00:18:33.957 08:19:07 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:18:33.957 08:19:07 -- target/tls.sh@51 -- # hash=01 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # gzip -1 -c 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # tail -c8 00:18:34.214 08:19:07 -- 
target/tls.sh@52 -- # head -c 4 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # crc='p$H�' 00:18:34.214 08:19:07 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:34.214 08:19:07 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:18:34.214 08:19:07 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:34.214 08:19:07 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:34.214 08:19:07 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:18:34.214 08:19:07 -- target/tls.sh@49 -- # local key hash crc 00:18:34.214 08:19:07 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:18:34.214 08:19:07 -- target/tls.sh@51 -- # hash=01 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # tail -c8 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # gzip -1 -c 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # head -c 4 00:18:34.214 08:19:07 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:18:34.214 08:19:07 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:34.214 08:19:07 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:18:34.214 08:19:07 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:34.215 08:19:07 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:34.215 08:19:07 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:34.215 08:19:07 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:34.215 08:19:07 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:34.215 08:19:07 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:34.215 08:19:07 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:34.215 08:19:07 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:34.215 08:19:07 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:34.215 08:19:07 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:34.472 08:19:08 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:34.472 08:19:08 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:34.472 08:19:08 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:34.730 [2024-02-13 08:19:08.237221] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.730 08:19:08 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.730 08:19:08 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.987 [2024-02-13 08:19:08.538002] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.987 [2024-02-13 08:19:08.538190] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.987 08:19:08 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.245 malloc0 00:18:35.245 08:19:08 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.245 08:19:08 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:35.502 08:19:09 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:35.502 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.686 Initializing NVMe Controllers 00:18:47.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:47.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:47.686 Initialization complete. Launching workers. 
00:18:47.686 ======================================================== 00:18:47.686 Latency(us) 00:18:47.686 Device Information : IOPS MiB/s Average min max 00:18:47.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17537.87 68.51 3649.64 782.00 5380.47 00:18:47.686 ======================================================== 00:18:47.686 Total : 17537.87 68.51 3649.64 782.00 5380.47 00:18:47.686 00:18:47.686 08:19:19 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:47.686 08:19:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:47.686 08:19:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:47.686 08:19:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:47.686 08:19:19 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:47.686 08:19:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.686 08:19:19 -- target/tls.sh@28 -- # bdevperf_pid=2284630 00:18:47.686 08:19:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.686 08:19:19 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.686 08:19:19 -- target/tls.sh@31 -- # waitforlisten 2284630 /var/tmp/bdevperf.sock 00:18:47.686 08:19:19 -- common/autotest_common.sh@817 -- # '[' -z 2284630 ']' 00:18:47.686 08:19:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.686 08:19:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:47.686 08:19:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:47.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.686 08:19:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:47.686 08:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:47.686 [2024-02-13 08:19:19.195750] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:47.686 [2024-02-13 08:19:19.195798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284630 ] 00:18:47.686 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.686 [2024-02-13 08:19:19.250828] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.686 [2024-02-13 08:19:19.318309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.686 08:19:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.686 08:19:19 -- common/autotest_common.sh@850 -- # return 0 00:18:47.686 08:19:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:47.686 [2024-02-13 08:19:20.146888] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.686 TLSTESTn1 00:18:47.686 08:19:20 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:47.686 Running I/O for 10 seconds... 
00:18:57.664 00:18:57.664 Latency(us) 00:18:57.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.664 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:57.664 Verification LBA range: start 0x0 length 0x2000 00:18:57.664 TLSTESTn1 : 10.03 2542.02 9.93 0.00 0.00 50286.02 9986.44 77394.90 00:18:57.664 =================================================================================================================== 00:18:57.664 Total : 2542.02 9.93 0.00 0.00 50286.02 9986.44 77394.90 00:18:57.664 0 00:18:57.664 08:19:30 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:57.664 08:19:30 -- target/tls.sh@45 -- # killprocess 2284630 00:18:57.664 08:19:30 -- common/autotest_common.sh@924 -- # '[' -z 2284630 ']' 00:18:57.664 08:19:30 -- common/autotest_common.sh@928 -- # kill -0 2284630 00:18:57.664 08:19:30 -- common/autotest_common.sh@929 -- # uname 00:18:57.664 08:19:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:57.664 08:19:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2284630 00:18:57.664 08:19:30 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:18:57.664 08:19:30 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:18:57.664 08:19:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2284630' 00:18:57.664 killing process with pid 2284630 00:18:57.664 08:19:30 -- common/autotest_common.sh@943 -- # kill 2284630 00:18:57.664 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.664 00:18:57.664 Latency(us) 00:18:57.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.664 =================================================================================================================== 00:18:57.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.664 08:19:30 -- common/autotest_common.sh@948 -- # wait 2284630 00:18:57.664 08:19:30 -- 
target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:57.664 08:19:30 -- common/autotest_common.sh@638 -- # local es=0 00:18:57.664 08:19:30 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:57.664 08:19:30 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:57.664 08:19:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:57.664 08:19:30 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:57.664 08:19:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:57.664 08:19:30 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:57.664 08:19:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:57.664 08:19:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:57.664 08:19:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:57.664 08:19:30 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:18:57.664 08:19:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:57.664 08:19:30 -- target/tls.sh@28 -- # bdevperf_pid=2286478 00:18:57.664 08:19:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:57.664 08:19:30 -- target/tls.sh@31 -- # waitforlisten 2286478 /var/tmp/bdevperf.sock 00:18:57.664 08:19:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:57.664 08:19:30 -- common/autotest_common.sh@817 -- # '[' -z 2286478 ']' 00:18:57.664 08:19:30 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.664 08:19:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:57.664 08:19:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.664 08:19:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:57.664 08:19:30 -- common/autotest_common.sh@10 -- # set +x 00:18:57.664 [2024-02-13 08:19:30.690920] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:57.664 [2024-02-13 08:19:30.690969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286478 ] 00:18:57.664 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.664 [2024-02-13 08:19:30.745806] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.664 [2024-02-13 08:19:30.815013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.930 08:19:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:57.930 08:19:31 -- common/autotest_common.sh@850 -- # return 0 00:18:57.930 08:19:31 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:58.188 [2024-02-13 08:19:31.656538] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.188 [2024-02-13 08:19:31.668255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:58.188 [2024-02-13 08:19:31.668885] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce6750 (107): Transport endpoint is not connected 00:18:58.188 [2024-02-13 08:19:31.669878] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce6750 (9): Bad file descriptor 00:18:58.188 [2024-02-13 08:19:31.670879] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.188 [2024-02-13 08:19:31.670891] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:58.188 [2024-02-13 08:19:31.670899] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:58.188 request: 00:18:58.188 { 00:18:58.188 "name": "TLSTEST", 00:18:58.188 "trtype": "tcp", 00:18:58.188 "traddr": "10.0.0.2", 00:18:58.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.188 "adrfam": "ipv4", 00:18:58.188 "trsvcid": "4420", 00:18:58.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.188 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:18:58.188 "method": "bdev_nvme_attach_controller", 00:18:58.188 "req_id": 1 00:18:58.188 } 00:18:58.188 Got JSON-RPC error response 00:18:58.188 response: 00:18:58.188 { 00:18:58.188 "code": -32602, 00:18:58.188 "message": "Invalid parameters" 00:18:58.188 } 00:18:58.188 08:19:31 -- target/tls.sh@36 -- # killprocess 2286478 00:18:58.188 08:19:31 -- common/autotest_common.sh@924 -- # '[' -z 2286478 ']' 00:18:58.188 08:19:31 -- common/autotest_common.sh@928 -- # kill -0 2286478 00:18:58.188 08:19:31 -- common/autotest_common.sh@929 -- # uname 00:18:58.188 08:19:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:58.188 08:19:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2286478 00:18:58.188 08:19:31 -- 
common/autotest_common.sh@930 -- # process_name=reactor_2 00:18:58.188 08:19:31 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:18:58.188 08:19:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2286478' 00:18:58.188 killing process with pid 2286478 00:18:58.188 08:19:31 -- common/autotest_common.sh@943 -- # kill 2286478 00:18:58.188 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.188 00:18:58.188 Latency(us) 00:18:58.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.188 =================================================================================================================== 00:18:58.188 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:58.188 08:19:31 -- common/autotest_common.sh@948 -- # wait 2286478 00:18:58.447 08:19:31 -- target/tls.sh@37 -- # return 1 00:18:58.447 08:19:31 -- common/autotest_common.sh@641 -- # es=1 00:18:58.447 08:19:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:58.447 08:19:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:58.447 08:19:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:58.447 08:19:31 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:58.447 08:19:31 -- common/autotest_common.sh@638 -- # local es=0 00:18:58.447 08:19:31 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:58.447 08:19:31 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:58.447 08:19:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:58.447 08:19:31 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:58.447 08:19:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:18:58.447 08:19:31 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:58.447 08:19:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:58.447 08:19:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:58.447 08:19:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:58.447 08:19:31 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:58.447 08:19:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.447 08:19:31 -- target/tls.sh@28 -- # bdevperf_pid=2286722 00:18:58.447 08:19:31 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.447 08:19:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.447 08:19:31 -- target/tls.sh@31 -- # waitforlisten 2286722 /var/tmp/bdevperf.sock 00:18:58.447 08:19:31 -- common/autotest_common.sh@817 -- # '[' -z 2286722 ']' 00:18:58.447 08:19:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.447 08:19:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.447 08:19:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.447 08:19:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.447 08:19:31 -- common/autotest_common.sh@10 -- # set +x 00:18:58.447 [2024-02-13 08:19:31.971154] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:18:58.447 [2024-02-13 08:19:31.971200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286722 ] 00:18:58.447 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.447 [2024-02-13 08:19:32.025918] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.447 [2024-02-13 08:19:32.089126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.382 08:19:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:59.382 08:19:32 -- common/autotest_common.sh@850 -- # return 0 00:18:59.382 08:19:32 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:59.382 [2024-02-13 08:19:32.922254] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.382 [2024-02-13 08:19:32.931384] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:59.382 [2024-02-13 08:19:32.931408] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:59.382 [2024-02-13 08:19:32.931432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:59.382 [2024-02-13 08:19:32.932633] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c7750 (107): Transport endpoint is not connected 00:18:59.382 [2024-02-13 
08:19:32.933626] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c7750 (9): Bad file descriptor 00:18:59.382 [2024-02-13 08:19:32.934628] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.382 [2024-02-13 08:19:32.934638] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:59.382 [2024-02-13 08:19:32.934650] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:59.382 request: 00:18:59.382 { 00:18:59.382 "name": "TLSTEST", 00:18:59.382 "trtype": "tcp", 00:18:59.382 "traddr": "10.0.0.2", 00:18:59.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:59.382 "adrfam": "ipv4", 00:18:59.382 "trsvcid": "4420", 00:18:59.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.382 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:18:59.382 "method": "bdev_nvme_attach_controller", 00:18:59.382 "req_id": 1 00:18:59.382 } 00:18:59.382 Got JSON-RPC error response 00:18:59.382 response: 00:18:59.382 { 00:18:59.382 "code": -32602, 00:18:59.382 "message": "Invalid parameters" 00:18:59.382 } 00:18:59.382 08:19:32 -- target/tls.sh@36 -- # killprocess 2286722 00:18:59.382 08:19:32 -- common/autotest_common.sh@924 -- # '[' -z 2286722 ']' 00:18:59.382 08:19:32 -- common/autotest_common.sh@928 -- # kill -0 2286722 00:18:59.382 08:19:32 -- common/autotest_common.sh@929 -- # uname 00:18:59.382 08:19:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:59.382 08:19:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2286722 00:18:59.382 08:19:32 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:18:59.382 08:19:32 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:18:59.382 08:19:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2286722' 00:18:59.382 killing process with pid 2286722 00:18:59.382 
08:19:32 -- common/autotest_common.sh@943 -- # kill 2286722 00:18:59.382 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.382 00:18:59.382 Latency(us) 00:18:59.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.382 =================================================================================================================== 00:18:59.382 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.382 08:19:32 -- common/autotest_common.sh@948 -- # wait 2286722 00:18:59.640 08:19:33 -- target/tls.sh@37 -- # return 1 00:18:59.640 08:19:33 -- common/autotest_common.sh@641 -- # es=1 00:18:59.640 08:19:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:59.640 08:19:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:59.640 08:19:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:59.641 08:19:33 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:59.641 08:19:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:59.641 08:19:33 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:59.641 08:19:33 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:59.641 08:19:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:59.641 08:19:33 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:59.641 08:19:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:59.641 08:19:33 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:59.641 08:19:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.641 08:19:33 -- 
target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:59.641 08:19:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.641 08:19:33 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:59.641 08:19:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.641 08:19:33 -- target/tls.sh@28 -- # bdevperf_pid=2286959 00:18:59.641 08:19:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.641 08:19:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.641 08:19:33 -- target/tls.sh@31 -- # waitforlisten 2286959 /var/tmp/bdevperf.sock 00:18:59.641 08:19:33 -- common/autotest_common.sh@817 -- # '[' -z 2286959 ']' 00:18:59.641 08:19:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.641 08:19:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:59.641 08:19:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.641 08:19:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:59.641 08:19:33 -- common/autotest_common.sh@10 -- # set +x 00:18:59.641 [2024-02-13 08:19:33.238493] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:18:59.641 [2024-02-13 08:19:33.238539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286959 ] 00:18:59.641 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.641 [2024-02-13 08:19:33.292993] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.899 [2024-02-13 08:19:33.357388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.464 08:19:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:00.464 08:19:34 -- common/autotest_common.sh@850 -- # return 0 00:19:00.464 08:19:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:00.722 [2024-02-13 08:19:34.181787] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.722 [2024-02-13 08:19:34.186328] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:00.722 [2024-02-13 08:19:34.186351] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:00.722 [2024-02-13 08:19:34.186375] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:00.722 [2024-02-13 08:19:34.187022] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83750 (107): Transport endpoint is not connected 00:19:00.722 [2024-02-13 
08:19:34.188012] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf83750 (9): Bad file descriptor 00:19:00.722 [2024-02-13 08:19:34.189013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:00.722 [2024-02-13 08:19:34.189024] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:00.722 [2024-02-13 08:19:34.189033] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:00.722 request: 00:19:00.722 { 00:19:00.722 "name": "TLSTEST", 00:19:00.722 "trtype": "tcp", 00:19:00.722 "traddr": "10.0.0.2", 00:19:00.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.722 "adrfam": "ipv4", 00:19:00.722 "trsvcid": "4420", 00:19:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:00.722 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:00.722 "method": "bdev_nvme_attach_controller", 00:19:00.722 "req_id": 1 00:19:00.722 } 00:19:00.722 Got JSON-RPC error response 00:19:00.722 response: 00:19:00.722 { 00:19:00.722 "code": -32602, 00:19:00.722 "message": "Invalid parameters" 00:19:00.722 } 00:19:00.722 08:19:34 -- target/tls.sh@36 -- # killprocess 2286959 00:19:00.722 08:19:34 -- common/autotest_common.sh@924 -- # '[' -z 2286959 ']' 00:19:00.722 08:19:34 -- common/autotest_common.sh@928 -- # kill -0 2286959 00:19:00.722 08:19:34 -- common/autotest_common.sh@929 -- # uname 00:19:00.722 08:19:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:00.722 08:19:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2286959 00:19:00.722 08:19:34 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:00.722 08:19:34 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:00.722 08:19:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2286959' 00:19:00.722 killing process with pid 2286959 00:19:00.722 
08:19:34 -- common/autotest_common.sh@943 -- # kill 2286959 00:19:00.722 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.722 00:19:00.722 Latency(us) 00:19:00.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.722 =================================================================================================================== 00:19:00.722 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.722 08:19:34 -- common/autotest_common.sh@948 -- # wait 2286959 00:19:00.981 08:19:34 -- target/tls.sh@37 -- # return 1 00:19:00.981 08:19:34 -- common/autotest_common.sh@641 -- # es=1 00:19:00.981 08:19:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:00.981 08:19:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:00.981 08:19:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:00.981 08:19:34 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:00.981 08:19:34 -- common/autotest_common.sh@638 -- # local es=0 00:19:00.981 08:19:34 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:00.981 08:19:34 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:00.981 08:19:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:00.981 08:19:34 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:00.981 08:19:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:00.981 08:19:34 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:00.981 08:19:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.981 08:19:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.981 08:19:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.981 08:19:34 -- target/tls.sh@23 -- # psk= 00:19:00.981 08:19:34 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.981 08:19:34 -- target/tls.sh@28 -- # bdevperf_pid=2287171 00:19:00.981 08:19:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.981 08:19:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.981 08:19:34 -- target/tls.sh@31 -- # waitforlisten 2287171 /var/tmp/bdevperf.sock 00:19:00.981 08:19:34 -- common/autotest_common.sh@817 -- # '[' -z 2287171 ']' 00:19:00.981 08:19:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.981 08:19:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:00.981 08:19:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.981 08:19:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:00.981 08:19:34 -- common/autotest_common.sh@10 -- # set +x 00:19:00.981 [2024-02-13 08:19:34.490896] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:19:00.981 [2024-02-13 08:19:34.490946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287171 ] 00:19:00.981 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.981 [2024-02-13 08:19:34.546554] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.981 [2024-02-13 08:19:34.611404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.915 08:19:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.915 08:19:35 -- common/autotest_common.sh@850 -- # return 0 00:19:01.915 08:19:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:01.915 [2024-02-13 08:19:35.427800] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:01.915 [2024-02-13 08:19:35.429661] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228c700 (9): Bad file descriptor 00:19:01.915 [2024-02-13 08:19:35.430660] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:01.915 [2024-02-13 08:19:35.430670] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:01.915 [2024-02-13 08:19:35.430678] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:01.915 request: 00:19:01.915 { 00:19:01.915 "name": "TLSTEST", 00:19:01.915 "trtype": "tcp", 00:19:01.915 "traddr": "10.0.0.2", 00:19:01.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.915 "adrfam": "ipv4", 00:19:01.915 "trsvcid": "4420", 00:19:01.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.915 "method": "bdev_nvme_attach_controller", 00:19:01.915 "req_id": 1 00:19:01.915 } 00:19:01.915 Got JSON-RPC error response 00:19:01.915 response: 00:19:01.915 { 00:19:01.915 "code": -32602, 00:19:01.915 "message": "Invalid parameters" 00:19:01.915 } 00:19:01.915 08:19:35 -- target/tls.sh@36 -- # killprocess 2287171 00:19:01.915 08:19:35 -- common/autotest_common.sh@924 -- # '[' -z 2287171 ']' 00:19:01.915 08:19:35 -- common/autotest_common.sh@928 -- # kill -0 2287171 00:19:01.915 08:19:35 -- common/autotest_common.sh@929 -- # uname 00:19:01.915 08:19:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:01.915 08:19:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2287171 00:19:01.915 08:19:35 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:01.915 08:19:35 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:01.915 08:19:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2287171' 00:19:01.915 killing process with pid 2287171 00:19:01.915 08:19:35 -- common/autotest_common.sh@943 -- # kill 2287171 00:19:01.915 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.915 00:19:01.915 Latency(us) 00:19:01.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.915 =================================================================================================================== 00:19:01.915 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.915 08:19:35 -- common/autotest_common.sh@948 -- # wait 2287171 00:19:02.173 08:19:35 -- target/tls.sh@37 -- # return 1 00:19:02.173 08:19:35 -- 
common/autotest_common.sh@641 -- # es=1 00:19:02.173 08:19:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:02.173 08:19:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:02.173 08:19:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:02.173 08:19:35 -- target/tls.sh@167 -- # killprocess 2282170 00:19:02.173 08:19:35 -- common/autotest_common.sh@924 -- # '[' -z 2282170 ']' 00:19:02.173 08:19:35 -- common/autotest_common.sh@928 -- # kill -0 2282170 00:19:02.173 08:19:35 -- common/autotest_common.sh@929 -- # uname 00:19:02.173 08:19:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:02.173 08:19:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2282170 00:19:02.173 08:19:35 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:02.173 08:19:35 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:02.173 08:19:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2282170' 00:19:02.173 killing process with pid 2282170 00:19:02.173 08:19:35 -- common/autotest_common.sh@943 -- # kill 2282170 00:19:02.173 08:19:35 -- common/autotest_common.sh@948 -- # wait 2282170 00:19:02.431 08:19:35 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:19:02.431 08:19:35 -- target/tls.sh@49 -- # local key hash crc 00:19:02.431 08:19:35 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:02.431 08:19:35 -- target/tls.sh@51 -- # hash=02 00:19:02.431 08:19:35 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:19:02.431 08:19:35 -- target/tls.sh@52 -- # tail -c8 00:19:02.431 08:19:35 -- target/tls.sh@52 -- # gzip -1 -c 00:19:02.431 08:19:35 -- target/tls.sh@52 -- # head -c 4 00:19:02.431 08:19:35 -- target/tls.sh@52 -- # crc='�e�'\''' 00:19:02.431 08:19:35 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:02.431 08:19:35 -- target/tls.sh@54 -- # echo -n 
'00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:19:02.431 08:19:35 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:02.431 08:19:35 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:02.431 08:19:35 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:02.432 08:19:35 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:02.432 08:19:35 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:02.432 08:19:35 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:19:02.432 08:19:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:02.432 08:19:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:02.432 08:19:35 -- common/autotest_common.sh@10 -- # set +x 00:19:02.432 08:19:35 -- nvmf/common.sh@469 -- # nvmfpid=2287456 00:19:02.432 08:19:35 -- nvmf/common.sh@470 -- # waitforlisten 2287456 00:19:02.432 08:19:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.432 08:19:35 -- common/autotest_common.sh@817 -- # '[' -z 2287456 ']' 00:19:02.432 08:19:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.432 08:19:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:02.432 08:19:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
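The `format_interchange_psk` pipeline traced above (target/tls.sh@49-54) builds the retained key by appending the CRC-32 of the raw configured key — extracted from a throwaway `gzip` stream — and base64-encoding the result. A standalone sketch of the same derivation (key value taken from the log; the `02` hash field selects SHA-384 in the NVMe TLS PSK interchange format):

```shell
# Re-derive the NVMe TLS interchange PSK the way target/tls.sh does:
# CRC-32 of the key, appended, then base64. Uses printf '%s' in place of
# the script's echo -n, which is byte-for-byte equivalent here.
key=00112233445566778899aabbccddeeff0011223344556677
hash=02
# The last 8 bytes of a gzip stream are CRC-32 (little-endian) + ISIZE;
# keep only the 4 CRC bytes.
crc=$(printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4)
psk="NVMeTLSkey-1:$hash:$(printf '%s' "$key$crc" | base64):"
echo "$psk"
```

This should reproduce the `NVMeTLSkey-1:02:MDAx...wWXNJw==:` value the log assigns to `key_long`.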
00:19:02.432 08:19:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:02.432 08:19:35 -- common/autotest_common.sh@10 -- # set +x 00:19:02.432 [2024-02-13 08:19:36.022970] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:02.432 [2024-02-13 08:19:36.023015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.432 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.432 [2024-02-13 08:19:36.085748] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.690 [2024-02-13 08:19:36.154208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:02.690 [2024-02-13 08:19:36.154313] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.690 [2024-02-13 08:19:36.154320] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.690 [2024-02-13 08:19:36.154327] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
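The `killprocess` helper that recurs throughout this log (common/autotest_common.sh@928-948) follows a fixed pattern: probe the pid with `kill -0`, read the process name with `ps -o comm=`, refuse to signal a bare `sudo`, then kill and reap. A minimal sketch, with `sleep` standing in for the SPDK app (names here are illustrative, not the helper's actual code):

```shell
# Start a stand-in process to tear down.
sleep 30 &
pid=$!

kill -0 "$pid"                  # fails if the process is already gone
name=$(ps -o comm= -p "$pid")   # process name; the log sees e.g. reactor_2
if [ "$name" = sudo ]; then     # never kill a bare sudo wrapper
    exit 1
fi
kill "$pid"
wait "$pid" 2>/dev/null || true # reap; SIGTERM exit status is expected
```

The `ps -o comm=` check is why the log prints `process_name=reactor_2` before each kill: SPDK reactors rename their threads, so the helper matches on the reactor name rather than the binary name.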
00:19:02.690 [2024-02-13 08:19:36.154342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.256 08:19:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:03.256 08:19:36 -- common/autotest_common.sh@850 -- # return 0 00:19:03.256 08:19:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:03.256 08:19:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:03.256 08:19:36 -- common/autotest_common.sh@10 -- # set +x 00:19:03.256 08:19:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.256 08:19:36 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:03.256 08:19:36 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:03.256 08:19:36 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.514 [2024-02-13 08:19:36.993203] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.514 08:19:37 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.514 08:19:37 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.772 [2024-02-13 08:19:37.322054] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.772 [2024-02-13 08:19:37.322249] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.772 08:19:37 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:04.030 malloc0 00:19:04.030 08:19:37 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:04.030 08:19:37 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:04.289 08:19:37 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:04.289 08:19:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:04.289 08:19:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:04.289 08:19:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:04.289 08:19:37 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:04.289 08:19:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.289 08:19:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.289 08:19:37 -- target/tls.sh@28 -- # bdevperf_pid=2287720 00:19:04.289 08:19:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.289 08:19:37 -- target/tls.sh@31 -- # waitforlisten 2287720 /var/tmp/bdevperf.sock 00:19:04.289 08:19:37 -- common/autotest_common.sh@817 -- # '[' -z 2287720 ']' 00:19:04.289 08:19:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.289 08:19:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.289 08:19:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:04.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.289 08:19:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.289 08:19:37 -- common/autotest_common.sh@10 -- # set +x 00:19:04.289 [2024-02-13 08:19:37.873320] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:04.289 [2024-02-13 08:19:37.873362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287720 ] 00:19:04.289 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.289 [2024-02-13 08:19:37.927179] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.546 [2024-02-13 08:19:37.997029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.112 08:19:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.112 08:19:38 -- common/autotest_common.sh@850 -- # return 0 00:19:05.112 08:19:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:05.370 [2024-02-13 08:19:38.810582] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.370 TLSTESTn1 00:19:05.370 08:19:38 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:05.370 Running I/O for 10 seconds... 
00:19:17.573 00:19:17.573 Latency(us) 00:19:17.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.573 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.573 Verification LBA range: start 0x0 length 0x2000 00:19:17.573 TLSTESTn1 : 10.04 2437.21 9.52 0.00 0.00 52443.53 8550.89 84884.72 00:19:17.573 =================================================================================================================== 00:19:17.573 Total : 2437.21 9.52 0.00 0.00 52443.53 8550.89 84884.72 00:19:17.573 0 00:19:17.573 08:19:49 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.573 08:19:49 -- target/tls.sh@45 -- # killprocess 2287720 00:19:17.573 08:19:49 -- common/autotest_common.sh@924 -- # '[' -z 2287720 ']' 00:19:17.573 08:19:49 -- common/autotest_common.sh@928 -- # kill -0 2287720 00:19:17.573 08:19:49 -- common/autotest_common.sh@929 -- # uname 00:19:17.573 08:19:49 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:17.573 08:19:49 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2287720 00:19:17.573 08:19:49 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:17.573 08:19:49 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:17.573 08:19:49 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2287720' 00:19:17.573 killing process with pid 2287720 00:19:17.573 08:19:49 -- common/autotest_common.sh@943 -- # kill 2287720 00:19:17.573 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.573 00:19:17.573 Latency(us) 00:19:17.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.573 =================================================================================================================== 00:19:17.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.573 08:19:49 -- common/autotest_common.sh@948 -- # wait 2287720 00:19:17.573 08:19:49 -- 
target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:17.573 08:19:49 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:17.573 08:19:49 -- common/autotest_common.sh@638 -- # local es=0 00:19:17.573 08:19:49 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:17.573 08:19:49 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:17.573 08:19:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:17.573 08:19:49 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:17.573 08:19:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:17.573 08:19:49 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:17.573 08:19:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.573 08:19:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.573 08:19:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.573 08:19:49 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:17.573 08:19:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.573 08:19:49 -- target/tls.sh@28 -- # bdevperf_pid=2289566 00:19:17.573 08:19:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.573 08:19:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.573 08:19:49 -- target/tls.sh@31 
-- # waitforlisten 2289566 /var/tmp/bdevperf.sock 00:19:17.573 08:19:49 -- common/autotest_common.sh@817 -- # '[' -z 2289566 ']' 00:19:17.573 08:19:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.573 08:19:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.573 08:19:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.573 08:19:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.573 08:19:49 -- common/autotest_common.sh@10 -- # set +x 00:19:17.573 [2024-02-13 08:19:49.348149] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:17.573 [2024-02-13 08:19:49.348194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289566 ] 00:19:17.573 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.573 [2024-02-13 08:19:49.401783] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.573 [2024-02-13 08:19:49.465266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.573 08:19:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.573 08:19:50 -- common/autotest_common.sh@850 -- # return 0 00:19:17.573 08:19:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:17.573 [2024-02-13 08:19:50.282377] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:17.573 [2024-02-13 08:19:50.282422] bdev_nvme_rpc.c: 337:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:17.573 request: 00:19:17.573 { 00:19:17.573 "name": "TLSTEST", 00:19:17.573 "trtype": "tcp", 00:19:17.573 "traddr": "10.0.0.2", 00:19:17.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.573 "adrfam": "ipv4", 00:19:17.573 "trsvcid": "4420", 00:19:17.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.573 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:17.573 "method": "bdev_nvme_attach_controller", 00:19:17.573 "req_id": 1 00:19:17.573 } 00:19:17.573 Got JSON-RPC error response 00:19:17.573 response: 00:19:17.573 { 00:19:17.573 "code": -22, 00:19:17.573 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:17.573 } 00:19:17.573 08:19:50 -- target/tls.sh@36 -- # killprocess 2289566 00:19:17.573 08:19:50 -- common/autotest_common.sh@924 -- # '[' -z 2289566 ']' 00:19:17.573 08:19:50 -- common/autotest_common.sh@928 -- # kill -0 2289566 00:19:17.573 08:19:50 -- common/autotest_common.sh@929 -- # uname 00:19:17.573 08:19:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:17.573 08:19:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2289566 00:19:17.573 08:19:50 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:17.573 08:19:50 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:17.573 08:19:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2289566' 00:19:17.573 killing process with pid 2289566 00:19:17.573 08:19:50 -- common/autotest_common.sh@943 -- # kill 2289566 00:19:17.573 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.573 00:19:17.573 Latency(us) 00:19:17.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.573 
=================================================================================================================== 00:19:17.573 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.573 08:19:50 -- common/autotest_common.sh@948 -- # wait 2289566 00:19:17.573 08:19:50 -- target/tls.sh@37 -- # return 1 00:19:17.573 08:19:50 -- common/autotest_common.sh@641 -- # es=1 00:19:17.573 08:19:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:17.573 08:19:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:17.573 08:19:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:17.573 08:19:50 -- target/tls.sh@183 -- # killprocess 2287456 00:19:17.573 08:19:50 -- common/autotest_common.sh@924 -- # '[' -z 2287456 ']' 00:19:17.573 08:19:50 -- common/autotest_common.sh@928 -- # kill -0 2287456 00:19:17.573 08:19:50 -- common/autotest_common.sh@929 -- # uname 00:19:17.573 08:19:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:17.574 08:19:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2287456 00:19:17.574 08:19:50 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:17.574 08:19:50 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:17.574 08:19:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2287456' 00:19:17.574 killing process with pid 2287456 00:19:17.574 08:19:50 -- common/autotest_common.sh@943 -- # kill 2287456 00:19:17.574 08:19:50 -- common/autotest_common.sh@948 -- # wait 2287456 00:19:17.574 08:19:50 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:17.574 08:19:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:17.574 08:19:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:17.574 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:19:17.574 08:19:50 -- nvmf/common.sh@469 -- # nvmfpid=2289815 00:19:17.574 08:19:50 -- nvmf/common.sh@470 -- # waitforlisten 2289815 00:19:17.574 08:19:50 -- nvmf/common.sh@468 -- 
# ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.574 08:19:50 -- common/autotest_common.sh@817 -- # '[' -z 2289815 ']' 00:19:17.574 08:19:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.574 08:19:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.574 08:19:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.574 08:19:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.574 08:19:50 -- common/autotest_common.sh@10 -- # set +x 00:19:17.574 [2024-02-13 08:19:50.830521] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:17.574 [2024-02-13 08:19:50.830566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.574 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.574 [2024-02-13 08:19:50.894665] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.574 [2024-02-13 08:19:50.960306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:17.574 [2024-02-13 08:19:50.960416] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.574 [2024-02-13 08:19:50.960425] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.574 [2024-02-13 08:19:50.960431] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
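The `-22` "Could not retrieve PSK from file" failure earlier in this run is a permissions check, not a parse error: `tcp_load_psk` rejects a key file that is group- or world-readable, which is exactly the state `chmod 0666` put it in, and `chmod 0600` (tls.sh@190) is what clears it. A small sketch of the failing and passing modes, using a temp file in place of `key_long.txt`:

```shell
# Reproduce the mode transition the test exercises on the PSK file.
psk_file=$(mktemp)
chmod 0666 "$psk_file"           # the state that makes SPDK return -22
mode=$(stat -c %a "$psk_file")
chmod 0600 "$psk_file"           # owner-only, the mode SPDK accepts
after=$(stat -c %a "$psk_file")
echo "before=$mode after=$after"
rm -f "$psk_file"
```

`stat -c %a` is the GNU coreutils form, which matches the Linux hosts this log runs on.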
00:19:17.574 [2024-02-13 08:19:50.960448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.140 08:19:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.140 08:19:51 -- common/autotest_common.sh@850 -- # return 0 00:19:18.140 08:19:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:18.140 08:19:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:18.140 08:19:51 -- common/autotest_common.sh@10 -- # set +x 00:19:18.140 08:19:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.140 08:19:51 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:18.140 08:19:51 -- common/autotest_common.sh@638 -- # local es=0 00:19:18.140 08:19:51 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:18.140 08:19:51 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:19:18.140 08:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.140 08:19:51 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:19:18.140 08:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.140 08:19:51 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:18.140 08:19:51 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:18.140 08:19:51 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.140 [2024-02-13 08:19:51.826441] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.398 08:19:51 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:18.398 08:19:52 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:18.656 [2024-02-13 08:19:52.143254] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:18.656 [2024-02-13 08:19:52.143456] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.656 08:19:52 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:18.656 malloc0 00:19:18.656 08:19:52 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:18.914 08:19:52 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:19.172 [2024-02-13 08:19:52.624985] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:19.172 [2024-02-13 08:19:52.625014] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:19.172 [2024-02-13 08:19:52.625027] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:19:19.172 request: 00:19:19.172 { 00:19:19.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.172 "host": "nqn.2016-06.io.spdk:host1", 00:19:19.172 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:19.172 "method": "nvmf_subsystem_add_host", 00:19:19.172 "req_id": 1 00:19:19.172 } 00:19:19.172 Got JSON-RPC error response 00:19:19.172 response: 00:19:19.172 { 00:19:19.172 "code": -32603, 00:19:19.172 "message": "Internal error" 
00:19:19.172 } 00:19:19.172 08:19:52 -- common/autotest_common.sh@641 -- # es=1 00:19:19.172 08:19:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.172 08:19:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.172 08:19:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.172 08:19:52 -- target/tls.sh@189 -- # killprocess 2289815 00:19:19.172 08:19:52 -- common/autotest_common.sh@924 -- # '[' -z 2289815 ']' 00:19:19.172 08:19:52 -- common/autotest_common.sh@928 -- # kill -0 2289815 00:19:19.172 08:19:52 -- common/autotest_common.sh@929 -- # uname 00:19:19.172 08:19:52 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:19.172 08:19:52 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2289815 00:19:19.172 08:19:52 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:19.172 08:19:52 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:19.172 08:19:52 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2289815' 00:19:19.172 killing process with pid 2289815 00:19:19.172 08:19:52 -- common/autotest_common.sh@943 -- # kill 2289815 00:19:19.172 08:19:52 -- common/autotest_common.sh@948 -- # wait 2289815 00:19:19.431 08:19:52 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:19.431 08:19:52 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:19:19.431 08:19:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:19.431 08:19:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:19.431 08:19:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.431 08:19:52 -- nvmf/common.sh@469 -- # nvmfpid=2290299 00:19:19.431 08:19:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.431 08:19:52 -- nvmf/common.sh@470 -- # waitforlisten 2290299 00:19:19.431 08:19:52 -- 
common/autotest_common.sh@817 -- # '[' -z 2290299 ']' 00:19:19.431 08:19:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.431 08:19:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:19.431 08:19:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.431 08:19:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:19.431 08:19:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.431 [2024-02-13 08:19:52.954786] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:19.431 [2024-02-13 08:19:52.954827] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.431 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.431 [2024-02-13 08:19:53.017516] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.431 [2024-02-13 08:19:53.080430] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:19.431 [2024-02-13 08:19:53.080540] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.431 [2024-02-13 08:19:53.080548] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.431 [2024-02-13 08:19:53.080553] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:19.431 [2024-02-13 08:19:53.080573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.366 08:19:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:20.366 08:19:53 -- common/autotest_common.sh@850 -- # return 0 00:19:20.366 08:19:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:20.366 08:19:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:20.366 08:19:53 -- common/autotest_common.sh@10 -- # set +x 00:19:20.366 08:19:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.366 08:19:53 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.366 08:19:53 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:20.366 08:19:53 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.366 [2024-02-13 08:19:53.922928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.366 08:19:53 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.623 08:19:54 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:20.623 [2024-02-13 08:19:54.247757] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.623 [2024-02-13 08:19:54.247946] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.623 08:19:54 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.881 malloc0 00:19:20.881 08:19:54 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.139 08:19:54 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:21.139 08:19:54 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.139 08:19:54 -- target/tls.sh@197 -- # bdevperf_pid=2290555 00:19:21.139 08:19:54 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.139 08:19:54 -- target/tls.sh@200 -- # waitforlisten 2290555 /var/tmp/bdevperf.sock 00:19:21.139 08:19:54 -- common/autotest_common.sh@817 -- # '[' -z 2290555 ']' 00:19:21.139 08:19:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.139 08:19:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:21.139 08:19:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.139 08:19:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:21.139 08:19:54 -- common/autotest_common.sh@10 -- # set +x 00:19:21.139 [2024-02-13 08:19:54.776438] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:19:21.139 [2024-02-13 08:19:54.776485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290555 ] 00:19:21.139 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.396 [2024-02-13 08:19:54.831595] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.396 [2024-02-13 08:19:54.900166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.960 08:19:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:21.960 08:19:55 -- common/autotest_common.sh@850 -- # return 0 00:19:21.960 08:19:55 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:22.217 [2024-02-13 08:19:55.725578] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.217 TLSTESTn1 00:19:22.218 08:19:55 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:22.475 08:19:56 -- target/tls.sh@205 -- # tgtconf='{ 00:19:22.475 "subsystems": [ 00:19:22.475 { 00:19:22.475 "subsystem": "iobuf", 00:19:22.475 "config": [ 00:19:22.475 { 00:19:22.475 "method": "iobuf_set_options", 00:19:22.475 "params": { 00:19:22.475 "small_pool_count": 8192, 00:19:22.475 "large_pool_count": 1024, 00:19:22.475 "small_bufsize": 8192, 00:19:22.475 "large_bufsize": 135168 00:19:22.475 } 00:19:22.475 } 00:19:22.475 ] 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "subsystem": "sock", 00:19:22.475 "config": [ 00:19:22.475 { 00:19:22.475 "method": "sock_impl_set_options", 00:19:22.475 "params": { 00:19:22.475 "impl_name": "posix", 
00:19:22.475 "recv_buf_size": 2097152, 00:19:22.475 "send_buf_size": 2097152, 00:19:22.475 "enable_recv_pipe": true, 00:19:22.475 "enable_quickack": false, 00:19:22.475 "enable_placement_id": 0, 00:19:22.475 "enable_zerocopy_send_server": true, 00:19:22.475 "enable_zerocopy_send_client": false, 00:19:22.475 "zerocopy_threshold": 0, 00:19:22.475 "tls_version": 0, 00:19:22.475 "enable_ktls": false 00:19:22.475 } 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "method": "sock_impl_set_options", 00:19:22.475 "params": { 00:19:22.475 "impl_name": "ssl", 00:19:22.475 "recv_buf_size": 4096, 00:19:22.475 "send_buf_size": 4096, 00:19:22.475 "enable_recv_pipe": true, 00:19:22.475 "enable_quickack": false, 00:19:22.475 "enable_placement_id": 0, 00:19:22.475 "enable_zerocopy_send_server": true, 00:19:22.475 "enable_zerocopy_send_client": false, 00:19:22.475 "zerocopy_threshold": 0, 00:19:22.475 "tls_version": 0, 00:19:22.475 "enable_ktls": false 00:19:22.475 } 00:19:22.475 } 00:19:22.475 ] 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "subsystem": "vmd", 00:19:22.475 "config": [] 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "subsystem": "accel", 00:19:22.475 "config": [ 00:19:22.475 { 00:19:22.475 "method": "accel_set_options", 00:19:22.475 "params": { 00:19:22.475 "small_cache_size": 128, 00:19:22.475 "large_cache_size": 16, 00:19:22.475 "task_count": 2048, 00:19:22.475 "sequence_count": 2048, 00:19:22.475 "buf_count": 2048 00:19:22.475 } 00:19:22.475 } 00:19:22.475 ] 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "subsystem": "bdev", 00:19:22.475 "config": [ 00:19:22.475 { 00:19:22.475 "method": "bdev_set_options", 00:19:22.475 "params": { 00:19:22.475 "bdev_io_pool_size": 65535, 00:19:22.475 "bdev_io_cache_size": 256, 00:19:22.475 "bdev_auto_examine": true, 00:19:22.475 "iobuf_small_cache_size": 128, 00:19:22.475 "iobuf_large_cache_size": 16 00:19:22.475 } 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "method": "bdev_raid_set_options", 00:19:22.475 "params": { 00:19:22.475 
"process_window_size_kb": 1024 00:19:22.475 } 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "method": "bdev_iscsi_set_options", 00:19:22.475 "params": { 00:19:22.475 "timeout_sec": 30 00:19:22.475 } 00:19:22.475 }, 00:19:22.475 { 00:19:22.475 "method": "bdev_nvme_set_options", 00:19:22.475 "params": { 00:19:22.475 "action_on_timeout": "none", 00:19:22.475 "timeout_us": 0, 00:19:22.475 "timeout_admin_us": 0, 00:19:22.475 "keep_alive_timeout_ms": 10000, 00:19:22.475 "arbitration_burst": 0, 00:19:22.476 "low_priority_weight": 0, 00:19:22.476 "medium_priority_weight": 0, 00:19:22.476 "high_priority_weight": 0, 00:19:22.476 "nvme_adminq_poll_period_us": 10000, 00:19:22.476 "nvme_ioq_poll_period_us": 0, 00:19:22.476 "io_queue_requests": 0, 00:19:22.476 "delay_cmd_submit": true, 00:19:22.476 "transport_retry_count": 4, 00:19:22.476 "bdev_retry_count": 3, 00:19:22.476 "transport_ack_timeout": 0, 00:19:22.476 "ctrlr_loss_timeout_sec": 0, 00:19:22.476 "reconnect_delay_sec": 0, 00:19:22.476 "fast_io_fail_timeout_sec": 0, 00:19:22.476 "disable_auto_failback": false, 00:19:22.476 "generate_uuids": false, 00:19:22.476 "transport_tos": 0, 00:19:22.476 "nvme_error_stat": false, 00:19:22.476 "rdma_srq_size": 0, 00:19:22.476 "io_path_stat": false, 00:19:22.476 "allow_accel_sequence": false, 00:19:22.476 "rdma_max_cq_size": 0, 00:19:22.476 "rdma_cm_event_timeout_ms": 0 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "bdev_nvme_set_hotplug", 00:19:22.476 "params": { 00:19:22.476 "period_us": 100000, 00:19:22.476 "enable": false 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "bdev_malloc_create", 00:19:22.476 "params": { 00:19:22.476 "name": "malloc0", 00:19:22.476 "num_blocks": 8192, 00:19:22.476 "block_size": 4096, 00:19:22.476 "physical_block_size": 4096, 00:19:22.476 "uuid": "a4286218-e100-455d-b508-4c1046c846fd", 00:19:22.476 "optimal_io_boundary": 0 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": 
"bdev_wait_for_examine" 00:19:22.476 } 00:19:22.476 ] 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "subsystem": "nbd", 00:19:22.476 "config": [] 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "subsystem": "scheduler", 00:19:22.476 "config": [ 00:19:22.476 { 00:19:22.476 "method": "framework_set_scheduler", 00:19:22.476 "params": { 00:19:22.476 "name": "static" 00:19:22.476 } 00:19:22.476 } 00:19:22.476 ] 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "subsystem": "nvmf", 00:19:22.476 "config": [ 00:19:22.476 { 00:19:22.476 "method": "nvmf_set_config", 00:19:22.476 "params": { 00:19:22.476 "discovery_filter": "match_any", 00:19:22.476 "admin_cmd_passthru": { 00:19:22.476 "identify_ctrlr": false 00:19:22.476 } 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "nvmf_set_max_subsystems", 00:19:22.476 "params": { 00:19:22.476 "max_subsystems": 1024 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "nvmf_set_crdt", 00:19:22.476 "params": { 00:19:22.476 "crdt1": 0, 00:19:22.476 "crdt2": 0, 00:19:22.476 "crdt3": 0 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "nvmf_create_transport", 00:19:22.476 "params": { 00:19:22.476 "trtype": "TCP", 00:19:22.476 "max_queue_depth": 128, 00:19:22.476 "max_io_qpairs_per_ctrlr": 127, 00:19:22.476 "in_capsule_data_size": 4096, 00:19:22.476 "max_io_size": 131072, 00:19:22.476 "io_unit_size": 131072, 00:19:22.476 "max_aq_depth": 128, 00:19:22.476 "num_shared_buffers": 511, 00:19:22.476 "buf_cache_size": 4294967295, 00:19:22.476 "dif_insert_or_strip": false, 00:19:22.476 "zcopy": false, 00:19:22.476 "c2h_success": false, 00:19:22.476 "sock_priority": 0, 00:19:22.476 "abort_timeout_sec": 1 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "nvmf_create_subsystem", 00:19:22.476 "params": { 00:19:22.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.476 "allow_any_host": false, 00:19:22.476 "serial_number": "SPDK00000000000001", 00:19:22.476 "model_number": "SPDK bdev 
Controller", 00:19:22.476 "max_namespaces": 10, 00:19:22.476 "min_cntlid": 1, 00:19:22.476 "max_cntlid": 65519, 00:19:22.476 "ana_reporting": false 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "nvmf_subsystem_add_host", 00:19:22.476 "params": { 00:19:22.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.476 "host": "nqn.2016-06.io.spdk:host1", 00:19:22.476 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "nvmf_subsystem_add_ns", 00:19:22.476 "params": { 00:19:22.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.476 "namespace": { 00:19:22.476 "nsid": 1, 00:19:22.476 "bdev_name": "malloc0", 00:19:22.476 "nguid": "A4286218E100455DB5084C1046C846FD", 00:19:22.476 "uuid": "a4286218-e100-455d-b508-4c1046c846fd" 00:19:22.476 } 00:19:22.476 } 00:19:22.476 }, 00:19:22.476 { 00:19:22.476 "method": "nvmf_subsystem_add_listener", 00:19:22.476 "params": { 00:19:22.476 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.476 "listen_address": { 00:19:22.476 "trtype": "TCP", 00:19:22.476 "adrfam": "IPv4", 00:19:22.476 "traddr": "10.0.0.2", 00:19:22.476 "trsvcid": "4420" 00:19:22.476 }, 00:19:22.476 "secure_channel": true 00:19:22.476 } 00:19:22.476 } 00:19:22.476 ] 00:19:22.476 } 00:19:22.476 ] 00:19:22.476 }' 00:19:22.476 08:19:56 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:22.734 08:19:56 -- target/tls.sh@206 -- # bdevperfconf='{ 00:19:22.734 "subsystems": [ 00:19:22.734 { 00:19:22.734 "subsystem": "iobuf", 00:19:22.734 "config": [ 00:19:22.734 { 00:19:22.734 "method": "iobuf_set_options", 00:19:22.734 "params": { 00:19:22.734 "small_pool_count": 8192, 00:19:22.734 "large_pool_count": 1024, 00:19:22.734 "small_bufsize": 8192, 00:19:22.734 "large_bufsize": 135168 00:19:22.734 } 00:19:22.734 } 00:19:22.734 ] 00:19:22.734 }, 00:19:22.734 { 00:19:22.734 "subsystem": 
"sock", 00:19:22.734 "config": [ 00:19:22.734 { 00:19:22.734 "method": "sock_impl_set_options", 00:19:22.734 "params": { 00:19:22.734 "impl_name": "posix", 00:19:22.734 "recv_buf_size": 2097152, 00:19:22.734 "send_buf_size": 2097152, 00:19:22.734 "enable_recv_pipe": true, 00:19:22.734 "enable_quickack": false, 00:19:22.734 "enable_placement_id": 0, 00:19:22.734 "enable_zerocopy_send_server": true, 00:19:22.734 "enable_zerocopy_send_client": false, 00:19:22.734 "zerocopy_threshold": 0, 00:19:22.734 "tls_version": 0, 00:19:22.734 "enable_ktls": false 00:19:22.734 } 00:19:22.734 }, 00:19:22.734 { 00:19:22.734 "method": "sock_impl_set_options", 00:19:22.734 "params": { 00:19:22.734 "impl_name": "ssl", 00:19:22.734 "recv_buf_size": 4096, 00:19:22.734 "send_buf_size": 4096, 00:19:22.735 "enable_recv_pipe": true, 00:19:22.735 "enable_quickack": false, 00:19:22.735 "enable_placement_id": 0, 00:19:22.735 "enable_zerocopy_send_server": true, 00:19:22.735 "enable_zerocopy_send_client": false, 00:19:22.735 "zerocopy_threshold": 0, 00:19:22.735 "tls_version": 0, 00:19:22.735 "enable_ktls": false 00:19:22.735 } 00:19:22.735 } 00:19:22.735 ] 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "subsystem": "vmd", 00:19:22.735 "config": [] 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "subsystem": "accel", 00:19:22.735 "config": [ 00:19:22.735 { 00:19:22.735 "method": "accel_set_options", 00:19:22.735 "params": { 00:19:22.735 "small_cache_size": 128, 00:19:22.735 "large_cache_size": 16, 00:19:22.735 "task_count": 2048, 00:19:22.735 "sequence_count": 2048, 00:19:22.735 "buf_count": 2048 00:19:22.735 } 00:19:22.735 } 00:19:22.735 ] 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "subsystem": "bdev", 00:19:22.735 "config": [ 00:19:22.735 { 00:19:22.735 "method": "bdev_set_options", 00:19:22.735 "params": { 00:19:22.735 "bdev_io_pool_size": 65535, 00:19:22.735 "bdev_io_cache_size": 256, 00:19:22.735 "bdev_auto_examine": true, 00:19:22.735 "iobuf_small_cache_size": 128, 00:19:22.735 
"iobuf_large_cache_size": 16 00:19:22.735 } 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "method": "bdev_raid_set_options", 00:19:22.735 "params": { 00:19:22.735 "process_window_size_kb": 1024 00:19:22.735 } 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "method": "bdev_iscsi_set_options", 00:19:22.735 "params": { 00:19:22.735 "timeout_sec": 30 00:19:22.735 } 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "method": "bdev_nvme_set_options", 00:19:22.735 "params": { 00:19:22.735 "action_on_timeout": "none", 00:19:22.735 "timeout_us": 0, 00:19:22.735 "timeout_admin_us": 0, 00:19:22.735 "keep_alive_timeout_ms": 10000, 00:19:22.735 "arbitration_burst": 0, 00:19:22.735 "low_priority_weight": 0, 00:19:22.735 "medium_priority_weight": 0, 00:19:22.735 "high_priority_weight": 0, 00:19:22.735 "nvme_adminq_poll_period_us": 10000, 00:19:22.735 "nvme_ioq_poll_period_us": 0, 00:19:22.735 "io_queue_requests": 512, 00:19:22.735 "delay_cmd_submit": true, 00:19:22.735 "transport_retry_count": 4, 00:19:22.735 "bdev_retry_count": 3, 00:19:22.735 "transport_ack_timeout": 0, 00:19:22.735 "ctrlr_loss_timeout_sec": 0, 00:19:22.735 "reconnect_delay_sec": 0, 00:19:22.735 "fast_io_fail_timeout_sec": 0, 00:19:22.735 "disable_auto_failback": false, 00:19:22.735 "generate_uuids": false, 00:19:22.735 "transport_tos": 0, 00:19:22.735 "nvme_error_stat": false, 00:19:22.735 "rdma_srq_size": 0, 00:19:22.735 "io_path_stat": false, 00:19:22.735 "allow_accel_sequence": false, 00:19:22.735 "rdma_max_cq_size": 0, 00:19:22.735 "rdma_cm_event_timeout_ms": 0 00:19:22.735 } 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "method": "bdev_nvme_attach_controller", 00:19:22.735 "params": { 00:19:22.735 "name": "TLSTEST", 00:19:22.735 "trtype": "TCP", 00:19:22.735 "adrfam": "IPv4", 00:19:22.735 "traddr": "10.0.0.2", 00:19:22.735 "trsvcid": "4420", 00:19:22.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.735 "prchk_reftag": false, 00:19:22.735 "prchk_guard": false, 00:19:22.735 "ctrlr_loss_timeout_sec": 0, 00:19:22.735 
"reconnect_delay_sec": 0, 00:19:22.735 "fast_io_fail_timeout_sec": 0, 00:19:22.735 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:22.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.735 "hdgst": false, 00:19:22.735 "ddgst": false 00:19:22.735 } 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "method": "bdev_nvme_set_hotplug", 00:19:22.735 "params": { 00:19:22.735 "period_us": 100000, 00:19:22.735 "enable": false 00:19:22.735 } 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "method": "bdev_wait_for_examine" 00:19:22.735 } 00:19:22.735 ] 00:19:22.735 }, 00:19:22.735 { 00:19:22.735 "subsystem": "nbd", 00:19:22.735 "config": [] 00:19:22.735 } 00:19:22.735 ] 00:19:22.735 }' 00:19:22.735 08:19:56 -- target/tls.sh@208 -- # killprocess 2290555 00:19:22.735 08:19:56 -- common/autotest_common.sh@924 -- # '[' -z 2290555 ']' 00:19:22.735 08:19:56 -- common/autotest_common.sh@928 -- # kill -0 2290555 00:19:22.735 08:19:56 -- common/autotest_common.sh@929 -- # uname 00:19:22.735 08:19:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:22.735 08:19:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2290555 00:19:22.735 08:19:56 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:22.735 08:19:56 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:22.735 08:19:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2290555' 00:19:22.735 killing process with pid 2290555 00:19:22.735 08:19:56 -- common/autotest_common.sh@943 -- # kill 2290555 00:19:22.735 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.735 00:19:22.735 Latency(us) 00:19:22.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.735 =================================================================================================================== 00:19:22.735 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.735 08:19:56 
-- common/autotest_common.sh@948 -- # wait 2290555 00:19:23.036 08:19:56 -- target/tls.sh@209 -- # killprocess 2290299 00:19:23.036 08:19:56 -- common/autotest_common.sh@924 -- # '[' -z 2290299 ']' 00:19:23.036 08:19:56 -- common/autotest_common.sh@928 -- # kill -0 2290299 00:19:23.036 08:19:56 -- common/autotest_common.sh@929 -- # uname 00:19:23.036 08:19:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:23.036 08:19:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2290299 00:19:23.036 08:19:56 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:23.036 08:19:56 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:23.036 08:19:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2290299' 00:19:23.036 killing process with pid 2290299 00:19:23.036 08:19:56 -- common/autotest_common.sh@943 -- # kill 2290299 00:19:23.036 08:19:56 -- common/autotest_common.sh@948 -- # wait 2290299 00:19:23.320 08:19:56 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:23.320 08:19:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:23.320 08:19:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:23.320 08:19:56 -- target/tls.sh@212 -- # echo '{ 00:19:23.320 "subsystems": [ 00:19:23.320 { 00:19:23.320 "subsystem": "iobuf", 00:19:23.320 "config": [ 00:19:23.320 { 00:19:23.320 "method": "iobuf_set_options", 00:19:23.320 "params": { 00:19:23.320 "small_pool_count": 8192, 00:19:23.320 "large_pool_count": 1024, 00:19:23.320 "small_bufsize": 8192, 00:19:23.320 "large_bufsize": 135168 00:19:23.320 } 00:19:23.320 } 00:19:23.320 ] 00:19:23.320 }, 00:19:23.320 { 00:19:23.320 "subsystem": "sock", 00:19:23.320 "config": [ 00:19:23.320 { 00:19:23.320 "method": "sock_impl_set_options", 00:19:23.320 "params": { 00:19:23.320 "impl_name": "posix", 00:19:23.320 "recv_buf_size": 2097152, 00:19:23.320 "send_buf_size": 2097152, 00:19:23.320 "enable_recv_pipe": true, 00:19:23.320 
"enable_quickack": false, 00:19:23.320 "enable_placement_id": 0, 00:19:23.320 "enable_zerocopy_send_server": true, 00:19:23.320 "enable_zerocopy_send_client": false, 00:19:23.320 "zerocopy_threshold": 0, 00:19:23.320 "tls_version": 0, 00:19:23.320 "enable_ktls": false 00:19:23.320 } 00:19:23.320 }, 00:19:23.320 { 00:19:23.320 "method": "sock_impl_set_options", 00:19:23.320 "params": { 00:19:23.320 "impl_name": "ssl", 00:19:23.320 "recv_buf_size": 4096, 00:19:23.320 "send_buf_size": 4096, 00:19:23.320 "enable_recv_pipe": true, 00:19:23.320 "enable_quickack": false, 00:19:23.320 "enable_placement_id": 0, 00:19:23.320 "enable_zerocopy_send_server": true, 00:19:23.320 "enable_zerocopy_send_client": false, 00:19:23.320 "zerocopy_threshold": 0, 00:19:23.320 "tls_version": 0, 00:19:23.320 "enable_ktls": false 00:19:23.320 } 00:19:23.320 } 00:19:23.320 ] 00:19:23.320 }, 00:19:23.320 { 00:19:23.320 "subsystem": "vmd", 00:19:23.320 "config": [] 00:19:23.320 }, 00:19:23.320 { 00:19:23.321 "subsystem": "accel", 00:19:23.321 "config": [ 00:19:23.321 { 00:19:23.321 "method": "accel_set_options", 00:19:23.321 "params": { 00:19:23.321 "small_cache_size": 128, 00:19:23.321 "large_cache_size": 16, 00:19:23.321 "task_count": 2048, 00:19:23.321 "sequence_count": 2048, 00:19:23.321 "buf_count": 2048 00:19:23.321 } 00:19:23.321 } 00:19:23.321 ] 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "subsystem": "bdev", 00:19:23.321 "config": [ 00:19:23.321 { 00:19:23.321 "method": "bdev_set_options", 00:19:23.321 "params": { 00:19:23.321 "bdev_io_pool_size": 65535, 00:19:23.321 "bdev_io_cache_size": 256, 00:19:23.321 "bdev_auto_examine": true, 00:19:23.321 "iobuf_small_cache_size": 128, 00:19:23.321 "iobuf_large_cache_size": 16 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "bdev_raid_set_options", 00:19:23.321 "params": { 00:19:23.321 "process_window_size_kb": 1024 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "bdev_iscsi_set_options", 00:19:23.321 
"params": { 00:19:23.321 "timeout_sec": 30 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "bdev_nvme_set_options", 00:19:23.321 "params": { 00:19:23.321 "action_on_timeout": "none", 00:19:23.321 "timeout_us": 0, 00:19:23.321 "timeout_admin_us": 0, 00:19:23.321 "keep_alive_timeout_ms": 10000, 00:19:23.321 "arbitration_burst": 0, 00:19:23.321 "low_priority_weight": 0, 00:19:23.321 "medium_priority_weight": 0, 00:19:23.321 "high_priority_weight": 0, 00:19:23.321 "nvme_adminq_poll_period_us": 10000, 00:19:23.321 "nvme_ioq_poll_period_us": 0, 00:19:23.321 "io_queue_requests": 0, 00:19:23.321 "delay_cmd_submit": true, 00:19:23.321 "transport_retry_count": 4, 00:19:23.321 "bdev_retry_count": 3, 00:19:23.321 "transport_ack_timeout": 0, 00:19:23.321 "ctrlr_loss_timeout_sec": 0, 00:19:23.321 "reconnect_delay_sec": 0, 00:19:23.321 "fast_io_fail_timeout_sec": 0, 00:19:23.321 "disable_auto_failback": false, 00:19:23.321 "generate_uuids": false, 00:19:23.321 "transport_tos": 0, 00:19:23.321 "nvme_error_stat": false, 00:19:23.321 "rdma_srq_size": 0, 00:19:23.321 "io_path_stat": false, 00:19:23.321 "allow_accel_sequence": false, 00:19:23.321 "rdma_max_cq_size": 0, 00:19:23.321 "rdma_cm_event_timeout_ms": 0 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "bdev_nvme_set_hotplug", 00:19:23.321 "params": { 00:19:23.321 "period_us": 100000, 00:19:23.321 "enable": false 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "bdev_malloc_create", 00:19:23.321 "params": { 00:19:23.321 "name": "malloc0", 00:19:23.321 "num_blocks": 8192, 00:19:23.321 "block_size": 4096, 00:19:23.321 "physical_block_size": 4096, 00:19:23.321 "uuid": "a4286218-e100-455d-b508-4c1046c846fd", 00:19:23.321 "optimal_io_boundary": 0 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "bdev_wait_for_examine" 00:19:23.321 } 00:19:23.321 ] 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "subsystem": "nbd", 00:19:23.321 "config": [] 00:19:23.321 }, 
00:19:23.321 { 00:19:23.321 "subsystem": "scheduler", 00:19:23.321 "config": [ 00:19:23.321 { 00:19:23.321 "method": "framework_set_scheduler", 00:19:23.321 "params": { 00:19:23.321 "name": "static" 00:19:23.321 } 00:19:23.321 } 00:19:23.321 ] 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "subsystem": "nvmf", 00:19:23.321 "config": [ 00:19:23.321 { 00:19:23.321 "method": "nvmf_set_config", 00:19:23.321 "params": { 00:19:23.321 "discovery_filter": "match_any", 00:19:23.321 "admin_cmd_passthru": { 00:19:23.321 "identify_ctrlr": false 00:19:23.321 } 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "nvmf_set_max_subsystems", 00:19:23.321 "params": { 00:19:23.321 "max_subsystems": 1024 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "nvmf_set_crdt", 00:19:23.321 "params": { 00:19:23.321 "crdt1": 0, 00:19:23.321 "crdt2": 0, 00:19:23.321 "crdt3": 0 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "nvmf_create_transport", 00:19:23.321 "params": { 00:19:23.321 "trtype": "TCP", 00:19:23.321 "max_queue_depth": 128, 00:19:23.321 "max_io_qpairs_per_ctrlr": 127, 00:19:23.321 "in_capsule_data_size": 4096, 00:19:23.321 "max_io_size": 131072, 00:19:23.321 "io_unit_size": 131072, 00:19:23.321 "max_aq_depth": 128, 00:19:23.321 "num_shared_buffers": 511, 00:19:23.321 "buf_cache_size": 4294967295, 00:19:23.321 "dif_insert_or_strip": false, 00:19:23.321 "zcopy": false, 00:19:23.321 "c2h_success": false, 00:19:23.321 "sock_priority": 0, 00:19:23.321 "abort_timeout_sec": 1 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "nvmf_create_subsystem", 00:19:23.321 "params": { 00:19:23.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.321 "allow_any_host": false, 00:19:23.321 "serial_number": "SPDK00000000000001", 00:19:23.321 "model_number": "SPDK bdev Controller", 00:19:23.321 "max_namespaces": 10, 00:19:23.321 "min_cntlid": 1, 00:19:23.321 "max_cntlid": 65519, 00:19:23.321 "ana_reporting": false 00:19:23.321 } 
00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "nvmf_subsystem_add_host", 00:19:23.321 "params": { 00:19:23.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.321 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.321 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.321 "method": "nvmf_subsystem_add_ns", 00:19:23.321 "params": { 00:19:23.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.321 "namespace": { 00:19:23.321 "nsid": 1, 00:19:23.321 "bdev_name": "malloc0", 00:19:23.321 "nguid": "A4286218E100455DB5084C1046C846FD", 00:19:23.321 "uuid": "a4286218-e100-455d-b508-4c1046c846fd" 00:19:23.321 } 00:19:23.321 } 00:19:23.321 }, 00:19:23.321 { 00:19:23.322 "method": "nvmf_subsystem_add_listener", 00:19:23.322 "params": { 00:19:23.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.322 "listen_address": { 00:19:23.322 "trtype": "TCP", 00:19:23.322 "adrfam": "IPv4", 00:19:23.322 "traddr": "10.0.0.2", 00:19:23.322 "trsvcid": "4420" 00:19:23.322 }, 00:19:23.322 "secure_channel": true 00:19:23.322 } 00:19:23.322 } 00:19:23.322 ] 00:19:23.322 } 00:19:23.322 ] 00:19:23.322 }' 00:19:23.322 08:19:56 -- common/autotest_common.sh@10 -- # set +x 00:19:23.322 08:19:56 -- nvmf/common.sh@469 -- # nvmfpid=2291026 00:19:23.322 08:19:56 -- nvmf/common.sh@470 -- # waitforlisten 2291026 00:19:23.322 08:19:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:23.322 08:19:56 -- common/autotest_common.sh@817 -- # '[' -z 2291026 ']' 00:19:23.322 08:19:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.322 08:19:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:23.322 08:19:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:23.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.322 08:19:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:23.322 08:19:56 -- common/autotest_common.sh@10 -- # set +x 00:19:23.322 [2024-02-13 08:19:56.836441] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:23.322 [2024-02-13 08:19:56.836486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.322 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.322 [2024-02-13 08:19:56.900376] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.322 [2024-02-13 08:19:56.964547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.322 [2024-02-13 08:19:56.964661] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.322 [2024-02-13 08:19:56.964669] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.322 [2024-02-13 08:19:56.964675] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.322 [2024-02-13 08:19:56.964714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.322 [2024-02-13 08:19:56.964733] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:19:23.580 [2024-02-13 08:19:57.157915] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.580 [2024-02-13 08:19:57.189957] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.580 [2024-02-13 08:19:57.190154] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.147 08:19:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.147 08:19:57 -- common/autotest_common.sh@850 -- # return 0 00:19:24.147 08:19:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.147 08:19:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:24.147 08:19:57 -- common/autotest_common.sh@10 -- # set +x 00:19:24.147 08:19:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.147 08:19:57 -- target/tls.sh@216 -- # bdevperf_pid=2291058 00:19:24.147 08:19:57 -- target/tls.sh@217 -- # waitforlisten 2291058 /var/tmp/bdevperf.sock 00:19:24.147 08:19:57 -- common/autotest_common.sh@817 -- # '[' -z 2291058 ']' 00:19:24.147 08:19:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.147 08:19:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:24.147 08:19:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:24.147 08:19:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:24.147 08:19:57 -- common/autotest_common.sh@10 -- # set +x 00:19:24.147 08:19:57 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:24.147 08:19:57 -- target/tls.sh@213 -- # echo '{ 00:19:24.147 "subsystems": [ 00:19:24.147 { 00:19:24.147 "subsystem": "iobuf", 00:19:24.147 "config": [ 00:19:24.147 { 00:19:24.147 "method": "iobuf_set_options", 00:19:24.147 "params": { 00:19:24.147 "small_pool_count": 8192, 00:19:24.147 "large_pool_count": 1024, 00:19:24.147 "small_bufsize": 8192, 00:19:24.147 "large_bufsize": 135168 00:19:24.147 } 00:19:24.147 } 00:19:24.147 ] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "sock", 00:19:24.147 "config": [ 00:19:24.147 { 00:19:24.147 "method": "sock_impl_set_options", 00:19:24.147 "params": { 00:19:24.147 "impl_name": "posix", 00:19:24.147 "recv_buf_size": 2097152, 00:19:24.147 "send_buf_size": 2097152, 00:19:24.147 "enable_recv_pipe": true, 00:19:24.147 "enable_quickack": false, 00:19:24.147 "enable_placement_id": 0, 00:19:24.147 "enable_zerocopy_send_server": true, 00:19:24.147 "enable_zerocopy_send_client": false, 00:19:24.147 "zerocopy_threshold": 0, 00:19:24.147 "tls_version": 0, 00:19:24.147 "enable_ktls": false 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "sock_impl_set_options", 00:19:24.147 "params": { 00:19:24.147 "impl_name": "ssl", 00:19:24.147 "recv_buf_size": 4096, 00:19:24.147 "send_buf_size": 4096, 00:19:24.147 "enable_recv_pipe": true, 00:19:24.147 "enable_quickack": false, 00:19:24.147 "enable_placement_id": 0, 00:19:24.147 "enable_zerocopy_send_server": true, 00:19:24.147 "enable_zerocopy_send_client": false, 00:19:24.147 "zerocopy_threshold": 0, 00:19:24.147 "tls_version": 0, 00:19:24.147 "enable_ktls": false 00:19:24.147 } 00:19:24.147 } 00:19:24.147 ] 00:19:24.147 
}, 00:19:24.147 { 00:19:24.147 "subsystem": "vmd", 00:19:24.147 "config": [] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "accel", 00:19:24.147 "config": [ 00:19:24.147 { 00:19:24.147 "method": "accel_set_options", 00:19:24.147 "params": { 00:19:24.147 "small_cache_size": 128, 00:19:24.147 "large_cache_size": 16, 00:19:24.147 "task_count": 2048, 00:19:24.147 "sequence_count": 2048, 00:19:24.147 "buf_count": 2048 00:19:24.147 } 00:19:24.147 } 00:19:24.147 ] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "bdev", 00:19:24.147 "config": [ 00:19:24.147 { 00:19:24.147 "method": "bdev_set_options", 00:19:24.147 "params": { 00:19:24.147 "bdev_io_pool_size": 65535, 00:19:24.147 "bdev_io_cache_size": 256, 00:19:24.147 "bdev_auto_examine": true, 00:19:24.147 "iobuf_small_cache_size": 128, 00:19:24.147 "iobuf_large_cache_size": 16 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_raid_set_options", 00:19:24.147 "params": { 00:19:24.147 "process_window_size_kb": 1024 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_iscsi_set_options", 00:19:24.147 "params": { 00:19:24.147 "timeout_sec": 30 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_nvme_set_options", 00:19:24.147 "params": { 00:19:24.147 "action_on_timeout": "none", 00:19:24.147 "timeout_us": 0, 00:19:24.147 "timeout_admin_us": 0, 00:19:24.147 "keep_alive_timeout_ms": 10000, 00:19:24.147 "arbitration_burst": 0, 00:19:24.147 "low_priority_weight": 0, 00:19:24.147 "medium_priority_weight": 0, 00:19:24.147 "high_priority_weight": 0, 00:19:24.147 "nvme_adminq_poll_period_us": 10000, 00:19:24.147 "nvme_ioq_poll_period_us": 0, 00:19:24.147 "io_queue_requests": 512, 00:19:24.147 "delay_cmd_submit": true, 00:19:24.147 "transport_retry_count": 4, 00:19:24.147 "bdev_retry_count": 3, 00:19:24.147 "transport_ack_timeout": 0, 00:19:24.147 "ctrlr_loss_timeout_sec": 0, 00:19:24.147 "reconnect_delay_sec": 0, 00:19:24.147 
"fast_io_fail_timeout_sec": 0, 00:19:24.147 "disable_auto_failback": false, 00:19:24.147 "generate_uuids": false, 00:19:24.147 "transport_tos": 0, 00:19:24.147 "nvme_error_stat": false, 00:19:24.147 "rdma_srq_size": 0, 00:19:24.147 "io_path_stat": false, 00:19:24.147 "allow_accel_sequence": false, 00:19:24.147 "rdma_max_cq_size": 0, 00:19:24.147 "rdma_cm_event_timeout_ms": 0 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_nvme_attach_controller", 00:19:24.147 "params": { 00:19:24.147 "name": "TLSTEST", 00:19:24.147 "trtype": "TCP", 00:19:24.147 "adrfam": "IPv4", 00:19:24.147 "traddr": "10.0.0.2", 00:19:24.147 "trsvcid": "4420", 00:19:24.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.147 "prchk_reftag": false, 00:19:24.147 "prchk_guard": false, 00:19:24.147 "ctrlr_loss_timeout_sec": 0, 00:19:24.147 "reconnect_delay_sec": 0, 00:19:24.147 "fast_io_fail_timeout_sec": 0, 00:19:24.147 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:24.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.148 "hdgst": false, 00:19:24.148 "ddgst": false 00:19:24.148 } 00:19:24.148 }, 00:19:24.148 { 00:19:24.148 "method": "bdev_nvme_set_hotplug", 00:19:24.148 "params": { 00:19:24.148 "period_us": 100000, 00:19:24.148 "enable": false 00:19:24.148 } 00:19:24.148 }, 00:19:24.148 { 00:19:24.148 "method": "bdev_wait_for_examine" 00:19:24.148 } 00:19:24.148 ] 00:19:24.148 }, 00:19:24.148 { 00:19:24.148 "subsystem": "nbd", 00:19:24.148 "config": [] 00:19:24.148 } 00:19:24.148 ] 00:19:24.148 }' 00:19:24.148 [2024-02-13 08:19:57.687470] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:19:24.148 [2024-02-13 08:19:57.687511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291058 ] 00:19:24.148 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.148 [2024-02-13 08:19:57.739949] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.148 [2024-02-13 08:19:57.808627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.148 [2024-02-13 08:19:57.808683] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:19:24.406 [2024-02-13 08:19:57.940387] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.971 08:19:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.971 08:19:58 -- common/autotest_common.sh@850 -- # return 0 00:19:24.971 08:19:58 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:24.971 Running I/O for 10 seconds... 
00:19:34.937 00:19:34.937 Latency(us) 00:19:34.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.937 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:34.937 Verification LBA range: start 0x0 length 0x2000 00:19:34.937 TLSTESTn1 : 10.02 2484.05 9.70 0.00 0.00 51480.10 6116.69 82887.44 00:19:34.937 =================================================================================================================== 00:19:34.937 Total : 2484.05 9.70 0.00 0.00 51480.10 6116.69 82887.44 00:19:34.937 0 00:19:34.937 08:20:08 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.937 08:20:08 -- target/tls.sh@223 -- # killprocess 2291058 00:19:34.937 08:20:08 -- common/autotest_common.sh@924 -- # '[' -z 2291058 ']' 00:19:34.937 08:20:08 -- common/autotest_common.sh@928 -- # kill -0 2291058 00:19:34.937 08:20:08 -- common/autotest_common.sh@929 -- # uname 00:19:35.195 08:20:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:35.195 08:20:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2291058 00:19:35.195 08:20:08 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:35.195 08:20:08 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:35.195 08:20:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2291058' 00:19:35.195 killing process with pid 2291058 00:19:35.195 08:20:08 -- common/autotest_common.sh@943 -- # kill 2291058 00:19:35.195 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.195 00:19:35.195 Latency(us) 00:19:35.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.195 =================================================================================================================== 00:19:35.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.195 [2024-02-13 08:20:08.669727] app.c: 881:log_deprecation_hits: *WARNING*: 
spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:19:35.195 08:20:08 -- common/autotest_common.sh@948 -- # wait 2291058 00:19:35.195 08:20:08 -- target/tls.sh@224 -- # killprocess 2291026 00:19:35.195 08:20:08 -- common/autotest_common.sh@924 -- # '[' -z 2291026 ']' 00:19:35.195 08:20:08 -- common/autotest_common.sh@928 -- # kill -0 2291026 00:19:35.195 08:20:08 -- common/autotest_common.sh@929 -- # uname 00:19:35.195 08:20:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:35.195 08:20:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2291026 00:19:35.453 08:20:08 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:35.453 08:20:08 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:35.453 08:20:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2291026' 00:19:35.453 killing process with pid 2291026 00:19:35.453 08:20:08 -- common/autotest_common.sh@943 -- # kill 2291026 00:19:35.453 [2024-02-13 08:20:08.912722] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:19:35.453 08:20:08 -- common/autotest_common.sh@948 -- # wait 2291026 00:19:35.453 08:20:09 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:19:35.453 08:20:09 -- target/tls.sh@227 -- # cleanup 00:19:35.453 08:20:09 -- target/tls.sh@15 -- # process_shm --id 0 00:19:35.453 08:20:09 -- common/autotest_common.sh@794 -- # type=--id 00:19:35.453 08:20:09 -- common/autotest_common.sh@795 -- # id=0 00:19:35.453 08:20:09 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:35.453 08:20:09 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:35.453 08:20:09 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:35.453 08:20:09 -- 
common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:35.453 08:20:09 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:35.453 08:20:09 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:35.453 nvmf_trace.0 00:19:35.711 08:20:09 -- common/autotest_common.sh@809 -- # return 0 00:19:35.711 08:20:09 -- target/tls.sh@16 -- # killprocess 2291058 00:19:35.711 08:20:09 -- common/autotest_common.sh@924 -- # '[' -z 2291058 ']' 00:19:35.711 08:20:09 -- common/autotest_common.sh@928 -- # kill -0 2291058 00:19:35.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (2291058) - No such process 00:19:35.711 08:20:09 -- common/autotest_common.sh@951 -- # echo 'Process with pid 2291058 is not found' 00:19:35.711 Process with pid 2291058 is not found 00:19:35.711 08:20:09 -- target/tls.sh@17 -- # nvmftestfini 00:19:35.711 08:20:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:35.711 08:20:09 -- nvmf/common.sh@116 -- # sync 00:19:35.711 08:20:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:35.711 08:20:09 -- nvmf/common.sh@119 -- # set +e 00:19:35.711 08:20:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:35.711 08:20:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:35.711 rmmod nvme_tcp 00:19:35.711 rmmod nvme_fabrics 00:19:35.711 rmmod nvme_keyring 00:19:35.711 08:20:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:35.711 08:20:09 -- nvmf/common.sh@123 -- # set -e 00:19:35.711 08:20:09 -- nvmf/common.sh@124 -- # return 0 00:19:35.711 08:20:09 -- nvmf/common.sh@477 -- # '[' -n 2291026 ']' 00:19:35.711 08:20:09 -- nvmf/common.sh@478 -- # killprocess 2291026 00:19:35.711 08:20:09 -- common/autotest_common.sh@924 -- # '[' -z 2291026 ']' 00:19:35.711 08:20:09 -- common/autotest_common.sh@928 -- # kill -0 2291026 00:19:35.711 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (2291026) - No such process 00:19:35.711 08:20:09 -- common/autotest_common.sh@951 -- # echo 'Process with pid 2291026 is not found' 00:19:35.711 Process with pid 2291026 is not found 00:19:35.711 08:20:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:35.711 08:20:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:35.711 08:20:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:35.711 08:20:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:35.711 08:20:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:35.711 08:20:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.711 08:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.711 08:20:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.611 08:20:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:37.611 08:20:11 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:37.870 00:19:37.870 real 1m12.700s 00:19:37.870 user 1m48.112s 00:19:37.870 sys 0m26.188s 00:19:37.870 08:20:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:37.870 08:20:11 -- common/autotest_common.sh@10 -- # set +x 00:19:37.870 ************************************ 00:19:37.870 END TEST nvmf_tls 00:19:37.870 ************************************ 00:19:37.870 08:20:11 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:37.870 08:20:11 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:37.870 08:20:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:37.870 08:20:11 -- common/autotest_common.sh@10 -- # set 
+x 00:19:37.870 ************************************ 00:19:37.870 START TEST nvmf_fips 00:19:37.870 ************************************ 00:19:37.870 08:20:11 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:37.870 * Looking for test storage... 00:19:37.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:37.870 08:20:11 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.870 08:20:11 -- nvmf/common.sh@7 -- # uname -s 00:19:37.871 08:20:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.871 08:20:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.871 08:20:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.871 08:20:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.871 08:20:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.871 08:20:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.871 08:20:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.871 08:20:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.871 08:20:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.871 08:20:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.871 08:20:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:37.871 08:20:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:37.871 08:20:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.871 08:20:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.871 08:20:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.871 08:20:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.871 08:20:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:19:37.871 08:20:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.871 08:20:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.871 08:20:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.871 08:20:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.871 08:20:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.871 08:20:11 -- paths/export.sh@5 -- # export PATH 00:19:37.871 08:20:11 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.871 08:20:11 -- nvmf/common.sh@46 -- # : 0 00:19:37.871 08:20:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:37.871 08:20:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:37.871 08:20:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:37.871 08:20:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.871 08:20:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.871 08:20:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:37.871 08:20:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:37.871 08:20:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:37.871 08:20:11 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:37.871 08:20:11 -- fips/fips.sh@89 -- # check_openssl_version 00:19:37.871 08:20:11 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:37.871 08:20:11 -- fips/fips.sh@85 -- # openssl version 00:19:37.871 08:20:11 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:37.871 08:20:11 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:37.871 08:20:11 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:37.871 08:20:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:37.871 08:20:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:37.871 08:20:11 -- scripts/common.sh@335 -- # IFS=.-: 00:19:37.871 08:20:11 -- scripts/common.sh@335 -- # read -ra ver1 00:19:37.871 08:20:11 -- 
scripts/common.sh@336 -- # IFS=.-: 00:19:37.871 08:20:11 -- scripts/common.sh@336 -- # read -ra ver2 00:19:37.871 08:20:11 -- scripts/common.sh@337 -- # local 'op=>=' 00:19:37.871 08:20:11 -- scripts/common.sh@339 -- # ver1_l=3 00:19:37.871 08:20:11 -- scripts/common.sh@340 -- # ver2_l=3 00:19:37.871 08:20:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:37.871 08:20:11 -- scripts/common.sh@343 -- # case "$op" in 00:19:37.871 08:20:11 -- scripts/common.sh@347 -- # : 1 00:19:37.871 08:20:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:37.871 08:20:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:37.871 08:20:11 -- scripts/common.sh@364 -- # decimal 3 00:19:37.871 08:20:11 -- scripts/common.sh@352 -- # local d=3 00:19:37.871 08:20:11 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:37.871 08:20:11 -- scripts/common.sh@354 -- # echo 3 00:19:37.871 08:20:11 -- scripts/common.sh@364 -- # ver1[v]=3 00:19:37.871 08:20:11 -- scripts/common.sh@365 -- # decimal 3 00:19:37.871 08:20:11 -- scripts/common.sh@352 -- # local d=3 00:19:37.871 08:20:11 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:37.871 08:20:11 -- scripts/common.sh@354 -- # echo 3 00:19:37.871 08:20:11 -- scripts/common.sh@365 -- # ver2[v]=3 00:19:37.871 08:20:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:37.871 08:20:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:37.871 08:20:11 -- scripts/common.sh@363 -- # (( v++ )) 00:19:37.871 08:20:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.871 08:20:11 -- scripts/common.sh@364 -- # decimal 0 00:19:37.871 08:20:11 -- scripts/common.sh@352 -- # local d=0 00:19:37.871 08:20:11 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:37.871 08:20:11 -- scripts/common.sh@354 -- # echo 0 00:19:37.871 08:20:11 -- scripts/common.sh@364 -- # ver1[v]=0 00:19:37.871 08:20:11 -- scripts/common.sh@365 -- # decimal 0 00:19:37.871 08:20:11 -- scripts/common.sh@352 -- # local d=0 00:19:37.871 08:20:11 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:37.871 08:20:11 -- scripts/common.sh@354 -- # echo 0 00:19:37.871 08:20:11 -- scripts/common.sh@365 -- # ver2[v]=0 00:19:37.871 08:20:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:37.871 08:20:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:37.871 08:20:11 -- scripts/common.sh@363 -- # (( v++ )) 00:19:37.871 08:20:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:37.871 08:20:11 -- scripts/common.sh@364 -- # decimal 9 00:19:37.871 08:20:11 -- scripts/common.sh@352 -- # local d=9 00:19:37.871 08:20:11 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:37.871 08:20:11 -- scripts/common.sh@354 -- # echo 9 00:19:37.871 08:20:11 -- scripts/common.sh@364 -- # ver1[v]=9 00:19:37.871 08:20:11 -- scripts/common.sh@365 -- # decimal 0 00:19:37.871 08:20:11 -- scripts/common.sh@352 -- # local d=0 00:19:37.871 08:20:11 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:37.871 08:20:11 -- scripts/common.sh@354 -- # echo 0 00:19:37.871 08:20:11 -- scripts/common.sh@365 -- # ver2[v]=0 00:19:37.871 08:20:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:37.871 08:20:11 -- scripts/common.sh@366 -- # return 0 00:19:37.871 08:20:11 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:37.871 08:20:11 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:37.871 08:20:11 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:37.871 08:20:11 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:37.871 08:20:11 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:37.871 08:20:11 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:37.871 08:20:11 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:37.871 08:20:11 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:19:37.871 08:20:11 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:19:37.871 08:20:11 -- fips/fips.sh@114 -- # build_openssl_config 00:19:37.871 08:20:11 -- fips/fips.sh@37 -- # cat 00:19:37.871 08:20:11 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:37.871 08:20:11 -- fips/fips.sh@58 -- # cat - 00:19:37.871 08:20:11 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:37.871 08:20:11 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:37.871 08:20:11 -- fips/fips.sh@117 -- # mapfile -t providers 00:19:37.871 08:20:11 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:19:37.871 08:20:11 -- fips/fips.sh@117 -- # openssl list -providers 00:19:37.871 08:20:11 -- fips/fips.sh@117 -- # grep name 00:19:38.129 08:20:11 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:38.130 08:20:11 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:38.130 08:20:11 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:38.130 08:20:11 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:38.130 08:20:11 -- common/autotest_common.sh@638 -- # local es=0 00:19:38.130 08:20:11 -- fips/fips.sh@128 -- # : 00:19:38.130 08:20:11 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:38.130 08:20:11 -- common/autotest_common.sh@626 -- # local arg=openssl 00:19:38.130 08:20:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:38.130 08:20:11 -- common/autotest_common.sh@630 -- # type -t openssl 00:19:38.130 08:20:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:38.130 08:20:11 -- common/autotest_common.sh@632 -- # type -P openssl 00:19:38.130 08:20:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:38.130 08:20:11 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:19:38.130 08:20:11 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:19:38.130 08:20:11 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:19:38.130 Error setting digest 00:19:38.130 00F21D5B1C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, 
Algorithm (MD5 : 97), Properties () 00:19:38.130 00F21D5B1C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:38.130 08:20:11 -- common/autotest_common.sh@641 -- # es=1 00:19:38.130 08:20:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:38.130 08:20:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:38.130 08:20:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:38.130 08:20:11 -- fips/fips.sh@131 -- # nvmftestinit 00:19:38.130 08:20:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:38.130 08:20:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.130 08:20:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:38.130 08:20:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:38.130 08:20:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:38.130 08:20:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.130 08:20:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.130 08:20:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.130 08:20:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:38.130 08:20:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:38.130 08:20:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:38.130 08:20:11 -- common/autotest_common.sh@10 -- # set +x 00:19:44.693 08:20:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:44.693 08:20:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:44.693 08:20:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:44.693 08:20:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:44.693 08:20:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:44.693 08:20:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:44.693 08:20:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:44.693 08:20:17 -- nvmf/common.sh@294 -- # net_devs=() 00:19:44.693 08:20:17 -- nvmf/common.sh@294 -- 
# local -ga net_devs 00:19:44.693 08:20:17 -- nvmf/common.sh@295 -- # e810=() 00:19:44.693 08:20:17 -- nvmf/common.sh@295 -- # local -ga e810 00:19:44.693 08:20:17 -- nvmf/common.sh@296 -- # x722=() 00:19:44.693 08:20:17 -- nvmf/common.sh@296 -- # local -ga x722 00:19:44.693 08:20:17 -- nvmf/common.sh@297 -- # mlx=() 00:19:44.693 08:20:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:44.693 08:20:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.693 08:20:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:44.693 08:20:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:44.693 08:20:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:44.693 08:20:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:44.693 08:20:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:44.693 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:19:44.693 08:20:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:44.693 08:20:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:44.693 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:44.693 08:20:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:44.693 08:20:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:44.693 08:20:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.693 08:20:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:44.693 08:20:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.693 08:20:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:44.693 Found net devices under 0000:af:00.0: cvl_0_0 00:19:44.693 08:20:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.693 08:20:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:44.693 08:20:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.693 08:20:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:44.693 
08:20:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.693 08:20:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:44.693 Found net devices under 0000:af:00.1: cvl_0_1 00:19:44.693 08:20:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.693 08:20:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:44.693 08:20:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:44.693 08:20:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:44.693 08:20:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.693 08:20:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.693 08:20:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.693 08:20:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:44.693 08:20:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.693 08:20:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.693 08:20:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:44.693 08:20:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.693 08:20:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.693 08:20:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:44.693 08:20:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:44.693 08:20:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.693 08:20:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.693 08:20:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.693 08:20:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.693 08:20:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:44.693 08:20:17 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.693 08:20:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.693 08:20:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.693 08:20:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:44.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:19:44.693 00:19:44.693 --- 10.0.0.2 ping statistics --- 00:19:44.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.693 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:19:44.693 08:20:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:19:44.693 00:19:44.693 --- 10.0.0.1 ping statistics --- 00:19:44.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.693 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:19:44.693 08:20:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.693 08:20:17 -- nvmf/common.sh@410 -- # return 0 00:19:44.693 08:20:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:44.693 08:20:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.693 08:20:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:44.693 08:20:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.693 08:20:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:44.693 08:20:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:44.693 08:20:17 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:44.693 08:20:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:44.693 08:20:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:44.693 08:20:17 -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.693 08:20:17 -- nvmf/common.sh@469 -- # nvmfpid=2296945 00:19:44.693 08:20:17 -- nvmf/common.sh@470 -- # waitforlisten 2296945 00:19:44.693 08:20:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.693 08:20:17 -- common/autotest_common.sh@817 -- # '[' -z 2296945 ']' 00:19:44.693 08:20:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.693 08:20:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:44.693 08:20:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.693 08:20:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:44.693 08:20:17 -- common/autotest_common.sh@10 -- # set +x 00:19:44.693 [2024-02-13 08:20:17.864183] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:44.693 [2024-02-13 08:20:17.864224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.693 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.693 [2024-02-13 08:20:17.925825] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.693 [2024-02-13 08:20:17.998378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:44.693 [2024-02-13 08:20:17.998489] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.693 [2024-02-13 08:20:17.998497] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:44.693 [2024-02-13 08:20:17.998503] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.693 [2024-02-13 08:20:17.998523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.951 08:20:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:44.951 08:20:18 -- common/autotest_common.sh@850 -- # return 0 00:19:44.951 08:20:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:44.951 08:20:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:44.951 08:20:18 -- common/autotest_common.sh@10 -- # set +x 00:19:45.209 08:20:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.209 08:20:18 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:45.209 08:20:18 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:45.209 08:20:18 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:45.209 08:20:18 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:45.209 08:20:18 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:45.209 08:20:18 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:45.209 08:20:18 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:45.209 08:20:18 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.209 [2024-02-13 08:20:18.817490] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.209 [2024-02-13 08:20:18.833501] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.209 [2024-02-13 08:20:18.833701] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:45.209 malloc0 00:19:45.209 08:20:18 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.209 08:20:18 -- fips/fips.sh@148 -- # bdevperf_pid=2297099 00:19:45.209 08:20:18 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.209 08:20:18 -- fips/fips.sh@149 -- # waitforlisten 2297099 /var/tmp/bdevperf.sock 00:19:45.209 08:20:18 -- common/autotest_common.sh@817 -- # '[' -z 2297099 ']' 00:19:45.209 08:20:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.209 08:20:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:45.209 08:20:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.209 08:20:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:45.209 08:20:18 -- common/autotest_common.sh@10 -- # set +x 00:19:45.467 [2024-02-13 08:20:18.941788] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:19:45.467 [2024-02-13 08:20:18.941841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297099 ] 00:19:45.467 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.467 [2024-02-13 08:20:18.998607] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.467 [2024-02-13 08:20:19.067970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.400 08:20:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:46.400 08:20:19 -- common/autotest_common.sh@850 -- # return 0 00:19:46.400 08:20:19 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:46.400 [2024-02-13 08:20:19.861512] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.400 TLSTESTn1 00:19:46.400 08:20:19 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:46.400 Running I/O for 10 seconds... 
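(Aside: the `--psk` file attached by `bdev_nvme_attach_controller` above holds the key generated earlier by fips.sh, in the NVMe TLS PSK interchange format `NVMeTLSkey-1:<hash>:<base64 of PSK bytes + 4-byte CRC32>:`, where hash field `01` denotes SHA-256 and hence a 32-byte PSK. A minimal sketch of how such a key decomposes — the helper below is ours, not SPDK code:

```python
import base64

def split_tls_psk(key: str):
    """Split an NVMe TLS PSK interchange string into its fields.

    Expected shape (per the NVMe/TCP transport spec):
        NVMeTLSkey-1:<hash>:<base64 of PSK bytes + 4-byte CRC32>:
    """
    prefix, hash_id, b64, trailer = key.split(":")
    if prefix != "NVMeTLSkey-1" or trailer != "":
        raise ValueError("not a PSK interchange string")
    raw = base64.b64decode(b64)
    # Last 4 bytes are the stored CRC32 of the PSK; the rest is the PSK itself.
    return hash_id, raw[:-4], raw[-4:]

# The key used by fips.sh in this run:
hash_id, psk, crc = split_tls_psk(
    "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:")
# hash field "01" -> SHA-256, so the PSK is 32 bytes and the CRC is 4 bytes
```

The sketch only splits the fields; verifying the trailing CRC32 against the PSK bytes is left out here.)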
00:19:58.596 00:19:58.596 Latency(us) 00:19:58.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.596 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.596 Verification LBA range: start 0x0 length 0x2000 00:19:58.596 TLSTESTn1 : 10.03 2178.04 8.51 0.00 0.00 58697.39 3916.56 69405.74 00:19:58.596 =================================================================================================================== 00:19:58.596 Total : 2178.04 8.51 0.00 0.00 58697.39 3916.56 69405.74 00:19:58.596 0 00:19:58.596 08:20:30 -- fips/fips.sh@1 -- # cleanup 00:19:58.596 08:20:30 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:58.596 08:20:30 -- common/autotest_common.sh@794 -- # type=--id 00:19:58.596 08:20:30 -- common/autotest_common.sh@795 -- # id=0 00:19:58.596 08:20:30 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:58.596 08:20:30 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:58.596 08:20:30 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:58.596 08:20:30 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:58.596 08:20:30 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:58.596 08:20:30 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:58.596 nvmf_trace.0 00:19:58.596 08:20:30 -- common/autotest_common.sh@809 -- # return 0 00:19:58.596 08:20:30 -- fips/fips.sh@16 -- # killprocess 2297099 00:19:58.596 08:20:30 -- common/autotest_common.sh@924 -- # '[' -z 2297099 ']' 00:19:58.596 08:20:30 -- common/autotest_common.sh@928 -- # kill -0 2297099 00:19:58.596 08:20:30 -- common/autotest_common.sh@929 -- # uname 00:19:58.596 08:20:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:58.596 08:20:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2297099 00:19:58.596 
08:20:30 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:19:58.596 08:20:30 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:19:58.596 08:20:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2297099' 00:19:58.596 killing process with pid 2297099 00:19:58.596 08:20:30 -- common/autotest_common.sh@943 -- # kill 2297099 00:19:58.596 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.596 00:19:58.596 Latency(us) 00:19:58.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.596 =================================================================================================================== 00:19:58.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.596 08:20:30 -- common/autotest_common.sh@948 -- # wait 2297099 00:19:58.596 08:20:30 -- fips/fips.sh@17 -- # nvmftestfini 00:19:58.596 08:20:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.596 08:20:30 -- nvmf/common.sh@116 -- # sync 00:19:58.596 08:20:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.596 08:20:30 -- nvmf/common.sh@119 -- # set +e 00:19:58.596 08:20:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.596 08:20:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.596 rmmod nvme_tcp 00:19:58.596 rmmod nvme_fabrics 00:19:58.596 rmmod nvme_keyring 00:19:58.596 08:20:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.596 08:20:30 -- nvmf/common.sh@123 -- # set -e 00:19:58.596 08:20:30 -- nvmf/common.sh@124 -- # return 0 00:19:58.596 08:20:30 -- nvmf/common.sh@477 -- # '[' -n 2296945 ']' 00:19:58.596 08:20:30 -- nvmf/common.sh@478 -- # killprocess 2296945 00:19:58.596 08:20:30 -- common/autotest_common.sh@924 -- # '[' -z 2296945 ']' 00:19:58.596 08:20:30 -- common/autotest_common.sh@928 -- # kill -0 2296945 00:19:58.596 08:20:30 -- common/autotest_common.sh@929 -- # uname 00:19:58.596 08:20:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 
00:19:58.596 08:20:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2296945 00:19:58.596 08:20:30 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:19:58.596 08:20:30 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:19:58.596 08:20:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2296945' 00:19:58.596 killing process with pid 2296945 00:19:58.596 08:20:30 -- common/autotest_common.sh@943 -- # kill 2296945 00:19:58.596 08:20:30 -- common/autotest_common.sh@948 -- # wait 2296945 00:19:58.596 08:20:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.596 08:20:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:58.596 08:20:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:58.596 08:20:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.596 08:20:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:58.596 08:20:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.596 08:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.596 08:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.164 08:20:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:59.164 08:20:32 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:59.164 00:19:59.164 real 0m21.454s 00:19:59.164 user 0m22.131s 00:19:59.164 sys 0m9.977s 00:19:59.164 08:20:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:59.164 08:20:32 -- common/autotest_common.sh@10 -- # set +x 00:19:59.164 ************************************ 00:19:59.164 END TEST nvmf_fips 00:19:59.164 ************************************ 00:19:59.164 08:20:32 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:19:59.164 08:20:32 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:59.164 08:20:32 -- 
common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:19:59.164 08:20:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:59.164 08:20:32 -- common/autotest_common.sh@10 -- # set +x 00:19:59.164 ************************************ 00:19:59.164 START TEST nvmf_fuzz 00:19:59.164 ************************************ 00:19:59.164 08:20:32 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:59.423 * Looking for test storage... 00:19:59.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.423 08:20:32 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.423 08:20:32 -- nvmf/common.sh@7 -- # uname -s 00:19:59.423 08:20:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.423 08:20:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.423 08:20:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.423 08:20:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.423 08:20:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.423 08:20:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.423 08:20:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.423 08:20:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.423 08:20:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.423 08:20:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.423 08:20:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:59.423 08:20:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:59.423 08:20:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.423 08:20:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.423 08:20:32 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:19:59.423 08:20:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.423 08:20:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.423 08:20:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.423 08:20:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.423 08:20:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.423 08:20:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.423 08:20:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.423 08:20:32 -- paths/export.sh@5 -- # export PATH 00:19:59.423 08:20:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.423 08:20:32 -- nvmf/common.sh@46 -- # : 0 00:19:59.423 08:20:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.423 08:20:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.423 08:20:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.423 08:20:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.423 08:20:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.423 08:20:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:59.423 08:20:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.423 08:20:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.423 08:20:32 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:19:59.423 08:20:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:59.423 08:20:32 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:19:59.423 08:20:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.423 08:20:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.423 08:20:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.423 08:20:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.423 08:20:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.423 08:20:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.423 08:20:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:59.423 08:20:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:59.423 08:20:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:59.423 08:20:32 -- common/autotest_common.sh@10 -- # set +x 00:20:06.023 08:20:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:06.023 08:20:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:06.023 08:20:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:06.023 08:20:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:06.023 08:20:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:06.023 08:20:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:06.023 08:20:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:06.023 08:20:38 -- nvmf/common.sh@294 -- # net_devs=() 00:20:06.023 08:20:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:06.023 08:20:38 -- nvmf/common.sh@295 -- # e810=() 00:20:06.023 08:20:38 -- nvmf/common.sh@295 -- # local -ga e810 00:20:06.023 08:20:38 -- nvmf/common.sh@296 -- # x722=() 00:20:06.023 08:20:38 -- nvmf/common.sh@296 -- # local -ga x722 00:20:06.023 08:20:38 -- nvmf/common.sh@297 -- # mlx=() 00:20:06.023 08:20:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:06.023 08:20:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.023 08:20:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:06.023 08:20:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:06.023 08:20:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:06.023 08:20:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:06.023 08:20:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:06.023 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:06.023 08:20:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:06.023 08:20:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:06.023 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:20:06.023 08:20:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:06.023 08:20:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:06.023 08:20:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.023 08:20:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:06.023 08:20:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.023 08:20:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:06.023 Found net devices under 0000:af:00.0: cvl_0_0 00:20:06.023 08:20:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.023 08:20:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:06.023 08:20:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.023 08:20:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:06.023 08:20:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.023 08:20:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:06.023 Found net devices under 0000:af:00.1: cvl_0_1 00:20:06.023 08:20:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.023 08:20:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:06.023 08:20:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:06.023 08:20:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:06.023 08:20:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:06.023 08:20:38 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:06.023 08:20:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.023 08:20:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.023 08:20:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.024 08:20:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:06.024 08:20:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.024 08:20:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.024 08:20:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:06.024 08:20:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.024 08:20:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.024 08:20:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:06.024 08:20:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:06.024 08:20:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.024 08:20:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.024 08:20:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.024 08:20:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.024 08:20:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:06.024 08:20:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.024 08:20:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.024 08:20:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.024 08:20:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:06.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:06.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:20:06.024 00:20:06.024 --- 10.0.0.2 ping statistics --- 00:20:06.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.024 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:06.024 08:20:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:20:06.024 00:20:06.024 --- 10.0.0.1 ping statistics --- 00:20:06.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.024 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:20:06.024 08:20:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.024 08:20:39 -- nvmf/common.sh@410 -- # return 0 00:20:06.024 08:20:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.024 08:20:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.024 08:20:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:06.024 08:20:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:06.024 08:20:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.024 08:20:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:06.024 08:20:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:06.024 08:20:39 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2302837 00:20:06.024 08:20:39 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:06.024 08:20:39 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:06.024 08:20:39 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2302837 00:20:06.024 08:20:39 -- common/autotest_common.sh@817 -- # '[' -z 2302837 ']' 00:20:06.024 08:20:39 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:06.024 08:20:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:06.024 08:20:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.024 08:20:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:06.024 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:20:06.282 08:20:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:06.282 08:20:39 -- common/autotest_common.sh@850 -- # return 0 00:20:06.282 08:20:39 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:06.282 08:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.282 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 08:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.540 08:20:39 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:06.540 08:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.540 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 Malloc0 00:20:06.540 08:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.540 08:20:39 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.540 08:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.541 08:20:39 -- common/autotest_common.sh@10 -- # set +x 00:20:06.541 08:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.541 08:20:40 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:06.541 08:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.541 08:20:40 -- common/autotest_common.sh@10 -- # set +x 00:20:06.541 08:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:20:06.541 08:20:40 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.541 08:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.541 08:20:40 -- common/autotest_common.sh@10 -- # set +x 00:20:06.541 08:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.541 08:20:40 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:06.541 08:20:40 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:38.610 Fuzzing completed. Shutting down the fuzz application 00:20:38.610 00:20:38.610 Dumping successful admin opcodes: 00:20:38.610 8, 9, 10, 24, 00:20:38.610 Dumping successful io opcodes: 00:20:38.610 0, 9, 00:20:38.610 NS: 0x200003aeff00 I/O qp, Total commands completed: 1002353, total successful commands: 5875, random_seed: 4168705152 00:20:38.610 NS: 0x200003aeff00 admin qp, Total commands completed: 117803, total successful commands: 964, random_seed: 1059626368 00:20:38.610 08:21:10 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:38.610 Fuzzing completed. 
Shutting down the fuzz application 00:20:38.610 00:20:38.610 Dumping successful admin opcodes: 00:20:38.610 24, 00:20:38.610 Dumping successful io opcodes: 00:20:38.610 00:20:38.610 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1126524854 00:20:38.610 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1126598742 00:20:38.610 08:21:11 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.610 08:21:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.610 08:21:11 -- common/autotest_common.sh@10 -- # set +x 00:20:38.610 08:21:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.610 08:21:11 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:38.610 08:21:11 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:38.610 08:21:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:38.610 08:21:11 -- nvmf/common.sh@116 -- # sync 00:20:38.610 08:21:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:38.610 08:21:11 -- nvmf/common.sh@119 -- # set +e 00:20:38.610 08:21:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:38.610 08:21:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:38.610 rmmod nvme_tcp 00:20:38.610 rmmod nvme_fabrics 00:20:38.610 rmmod nvme_keyring 00:20:38.610 08:21:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:38.610 08:21:11 -- nvmf/common.sh@123 -- # set -e 00:20:38.610 08:21:11 -- nvmf/common.sh@124 -- # return 0 00:20:38.610 08:21:11 -- nvmf/common.sh@477 -- # '[' -n 2302837 ']' 00:20:38.610 08:21:11 -- nvmf/common.sh@478 -- # killprocess 2302837 00:20:38.610 08:21:11 -- common/autotest_common.sh@924 -- # '[' -z 2302837 ']' 00:20:38.610 08:21:11 -- common/autotest_common.sh@928 -- # kill -0 2302837 00:20:38.610 08:21:11 -- common/autotest_common.sh@929 -- # uname 00:20:38.610 08:21:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 
00:20:38.610 08:21:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2302837 00:20:38.610 08:21:11 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:38.610 08:21:11 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:38.610 08:21:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2302837' 00:20:38.610 killing process with pid 2302837 00:20:38.610 08:21:11 -- common/autotest_common.sh@943 -- # kill 2302837 00:20:38.610 08:21:11 -- common/autotest_common.sh@948 -- # wait 2302837 00:20:38.610 08:21:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:38.610 08:21:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:38.610 08:21:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:38.610 08:21:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.610 08:21:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:38.610 08:21:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.610 08:21:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.610 08:21:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.512 08:21:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:40.512 08:21:14 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:40.771 00:20:40.771 real 0m41.385s 00:20:40.771 user 0m54.954s 00:20:40.771 sys 0m15.824s 00:20:40.771 08:21:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:40.771 08:21:14 -- common/autotest_common.sh@10 -- # set +x 00:20:40.771 ************************************ 00:20:40.771 END TEST nvmf_fuzz 00:20:40.771 ************************************ 00:20:40.771 08:21:14 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:20:40.771 08:21:14 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:20:40.771 08:21:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:40.771 08:21:14 -- common/autotest_common.sh@10 -- # set +x 00:20:40.771 ************************************ 00:20:40.771 START TEST nvmf_multiconnection 00:20:40.771 ************************************ 00:20:40.771 08:21:14 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:40.771 * Looking for test storage... 00:20:40.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:40.771 08:21:14 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.771 08:21:14 -- nvmf/common.sh@7 -- # uname -s 00:20:40.771 08:21:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.771 08:21:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.771 08:21:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.771 08:21:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.771 08:21:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.771 08:21:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.771 08:21:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.771 08:21:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.771 08:21:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.771 08:21:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.771 08:21:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:40.771 08:21:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:40.771 08:21:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.771 08:21:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:20:40.771 08:21:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.771 08:21:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.771 08:21:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.771 08:21:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.771 08:21:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.771 08:21:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.771 08:21:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.771 08:21:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.771 08:21:14 -- paths/export.sh@5 -- # export PATH 00:20:40.771 08:21:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.771 08:21:14 -- nvmf/common.sh@46 -- # : 0 00:20:40.771 08:21:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:40.771 08:21:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:40.771 08:21:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:40.771 08:21:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.771 08:21:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.771 08:21:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:40.771 08:21:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:40.771 08:21:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:40.771 08:21:14 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.771 08:21:14 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.771 08:21:14 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:40.771 08:21:14 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:40.771 08:21:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:40.771 08:21:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.771 08:21:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:40.771 08:21:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:40.771 08:21:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:40.771 08:21:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.771 08:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.771 08:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.771 08:21:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:40.771 08:21:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:40.771 08:21:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:40.771 08:21:14 -- common/autotest_common.sh@10 -- # set +x 00:20:47.333 08:21:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:47.333 08:21:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:47.333 08:21:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:47.333 08:21:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:47.333 08:21:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:47.333 08:21:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:47.333 08:21:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:47.333 08:21:20 -- nvmf/common.sh@294 -- # net_devs=() 00:20:47.333 08:21:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:47.333 08:21:20 -- nvmf/common.sh@295 -- # e810=() 00:20:47.333 08:21:20 -- nvmf/common.sh@295 -- # local -ga e810 00:20:47.333 08:21:20 -- nvmf/common.sh@296 -- # x722=() 00:20:47.333 08:21:20 -- nvmf/common.sh@296 -- # local -ga x722 00:20:47.333 08:21:20 -- nvmf/common.sh@297 -- # mlx=() 00:20:47.333 08:21:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:47.333 
08:21:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.333 08:21:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:47.333 08:21:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:47.333 08:21:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:47.333 08:21:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:47.333 08:21:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:47.333 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:47.333 08:21:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:20:47.333 08:21:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:47.333 08:21:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:47.333 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:47.333 08:21:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:47.333 08:21:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:47.334 08:21:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:47.334 08:21:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:47.334 08:21:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:47.334 08:21:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.334 08:21:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:47.334 08:21:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.334 08:21:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:47.334 Found net devices under 0000:af:00.0: cvl_0_0 00:20:47.334 08:21:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.334 08:21:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:47.334 08:21:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.334 08:21:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:47.334 08:21:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.334 08:21:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:47.334 Found net devices under 0000:af:00.1: cvl_0_1 00:20:47.334 08:21:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.334 08:21:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:47.334 
08:21:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:47.334 08:21:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:47.334 08:21:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:47.334 08:21:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:47.334 08:21:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.334 08:21:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.334 08:21:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.334 08:21:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:47.334 08:21:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.334 08:21:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.334 08:21:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:47.334 08:21:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.334 08:21:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.334 08:21:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:47.334 08:21:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:47.334 08:21:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.334 08:21:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.334 08:21:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.334 08:21:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.334 08:21:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:47.334 08:21:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.334 08:21:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.334 08:21:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.334 08:21:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:47.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:20:47.334 00:20:47.334 --- 10.0.0.2 ping statistics --- 00:20:47.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.334 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:47.334 08:21:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:20:47.334 00:20:47.334 --- 10.0.0.1 ping statistics --- 00:20:47.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.334 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:47.334 08:21:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.334 08:21:20 -- nvmf/common.sh@410 -- # return 0 00:20:47.334 08:21:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:47.334 08:21:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.334 08:21:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:47.334 08:21:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:47.334 08:21:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.334 08:21:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:47.334 08:21:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:47.334 08:21:20 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:47.334 08:21:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:47.334 08:21:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:47.334 08:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:47.334 08:21:20 -- nvmf/common.sh@469 -- # nvmfpid=2312463 00:20:47.334 08:21:20 -- nvmf/common.sh@470 -- # waitforlisten 2312463 00:20:47.334 08:21:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:47.334 08:21:20 -- 
common/autotest_common.sh@817 -- # '[' -z 2312463 ']' 00:20:47.334 08:21:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.334 08:21:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:47.334 08:21:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.334 08:21:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:47.334 08:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:47.334 [2024-02-13 08:21:20.481426] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:20:47.334 [2024-02-13 08:21:20.481471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.334 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.334 [2024-02-13 08:21:20.548070] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.334 [2024-02-13 08:21:20.620804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:47.334 [2024-02-13 08:21:20.620919] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.334 [2024-02-13 08:21:20.620926] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.334 [2024-02-13 08:21:20.620932] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.334 [2024-02-13 08:21:20.620977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.334 [2024-02-13 08:21:20.620997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.334 [2024-02-13 08:21:20.621101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.334 [2024-02-13 08:21:20.621102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.591 08:21:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:47.591 08:21:21 -- common/autotest_common.sh@850 -- # return 0 00:20:47.591 08:21:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:47.591 08:21:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:47.591 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.850 08:21:21 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 [2024-02-13 08:21:21.311818] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:47.850 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.850 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 Malloc1 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:47.850 08:21:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 [2024-02-13 08:21:21.367253] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.850 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 Malloc2 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.850 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 Malloc3 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.850 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 Malloc4 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:47.850 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 Malloc5 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:47.850 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.850 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:47.850 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.850 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.109 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 Malloc6 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.109 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 Malloc7 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.109 08:21:21 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 Malloc8 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.109 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 Malloc9 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.109 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 Malloc10 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.109 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:48.109 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.109 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.109 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.110 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:48.110 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.110 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.110 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.110 08:21:21 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:48.110 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.110 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.110 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.110 08:21:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.110 08:21:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:48.110 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.110 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.110 Malloc11 00:20:48.110 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.110 08:21:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:48.110 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.110 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.110 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.110 08:21:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:48.110 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.368 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.368 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.368 08:21:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:48.368 08:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.368 08:21:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.368 08:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.368 08:21:21 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:48.368 08:21:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.368 08:21:21 
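The eleven near-identical xtrace blocks above all come from one short loop in target/multiconnection.sh (script lines 21-25 per the `@21`..`@25` markers): create a malloc bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. A condensed sketch of that loop, with `rpc_cmd` stubbed to print its arguments so it runs standalone; the 64 MiB / 512-byte malloc geometry, the `cnode`/`SPDK` naming pattern, and the 10.0.0.2:4420 listener are copied from the logged commands, not from the script source itself:

```shell
# Hedged sketch of the setup phase recorded in the log above.
# In the real test, rpc_cmd forwards to SPDK's scripts/rpc.py against the
# running nvmf target; here it is stubbed so the sketch is self-contained.
rpc_cmd() { printf 'rpc_cmd %s\n' "$*"; }

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                        # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
```

The `SPDK$i` serial number set here is what the subsequent `waitforserial` checks poll for on the initiator side (via `lsblk -l -o NAME,SERIAL | grep -c SPDK$i`) after each `nvme connect`.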
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:49.750 08:21:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:49.750 08:21:23 -- common/autotest_common.sh@1175 -- # local i=0 00:20:49.750 08:21:23 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:20:49.750 08:21:23 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:20:49.750 08:21:23 -- common/autotest_common.sh@1182 -- # sleep 2 00:20:51.652 08:21:25 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:20:51.652 08:21:25 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:20:51.652 08:21:25 -- common/autotest_common.sh@1184 -- # grep -c SPDK1 00:20:51.652 08:21:25 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:20:51.652 08:21:25 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:20:51.652 08:21:25 -- common/autotest_common.sh@1185 -- # return 0 00:20:51.652 08:21:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:51.652 08:21:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:20:52.588 08:21:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:52.588 08:21:26 -- common/autotest_common.sh@1175 -- # local i=0 00:20:52.588 08:21:26 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:20:52.588 08:21:26 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:20:52.588 08:21:26 -- common/autotest_common.sh@1182 -- # sleep 2 00:20:55.118 08:21:28 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:20:55.118 08:21:28 -- common/autotest_common.sh@1184 -- # lsblk 
-l -o NAME,SERIAL 00:20:55.118 08:21:28 -- common/autotest_common.sh@1184 -- # grep -c SPDK2 00:20:55.118 08:21:28 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:20:55.118 08:21:28 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:20:55.119 08:21:28 -- common/autotest_common.sh@1185 -- # return 0 00:20:55.119 08:21:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:55.119 08:21:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:20:56.055 08:21:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:56.055 08:21:29 -- common/autotest_common.sh@1175 -- # local i=0 00:20:56.055 08:21:29 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:20:56.055 08:21:29 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:20:56.055 08:21:29 -- common/autotest_common.sh@1182 -- # sleep 2 00:20:57.954 08:21:31 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:20:57.954 08:21:31 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:20:57.954 08:21:31 -- common/autotest_common.sh@1184 -- # grep -c SPDK3 00:20:57.954 08:21:31 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:20:57.954 08:21:31 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:20:57.954 08:21:31 -- common/autotest_common.sh@1185 -- # return 0 00:20:57.954 08:21:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.954 08:21:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:20:59.327 08:21:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:20:59.327 08:21:32 -- common/autotest_common.sh@1175 -- # local i=0 00:20:59.327 08:21:32 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:20:59.327 08:21:32 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:20:59.327 08:21:32 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:01.228 08:21:34 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:01.228 08:21:34 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:01.228 08:21:34 -- common/autotest_common.sh@1184 -- # grep -c SPDK4 00:21:01.228 08:21:34 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:01.228 08:21:34 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:01.228 08:21:34 -- common/autotest_common.sh@1185 -- # return 0 00:21:01.228 08:21:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.228 08:21:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:02.603 08:21:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:02.603 08:21:35 -- common/autotest_common.sh@1175 -- # local i=0 00:21:02.603 08:21:35 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:02.603 08:21:35 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:02.603 08:21:35 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:04.504 08:21:37 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:04.504 08:21:37 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:04.504 08:21:37 -- common/autotest_common.sh@1184 -- # grep -c SPDK5 00:21:04.504 08:21:37 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:04.504 08:21:38 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:04.504 08:21:38 -- 
common/autotest_common.sh@1185 -- # return 0 00:21:04.504 08:21:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.504 08:21:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:05.882 08:21:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:05.882 08:21:39 -- common/autotest_common.sh@1175 -- # local i=0 00:21:05.882 08:21:39 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:05.882 08:21:39 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:05.882 08:21:39 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:07.783 08:21:41 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:07.783 08:21:41 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:07.783 08:21:41 -- common/autotest_common.sh@1184 -- # grep -c SPDK6 00:21:07.783 08:21:41 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:07.783 08:21:41 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:07.783 08:21:41 -- common/autotest_common.sh@1185 -- # return 0 00:21:07.783 08:21:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:07.783 08:21:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:09.170 08:21:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:09.170 08:21:42 -- common/autotest_common.sh@1175 -- # local i=0 00:21:09.170 08:21:42 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:09.170 08:21:42 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:09.170 08:21:42 -- common/autotest_common.sh@1182 
-- # sleep 2 00:21:11.132 08:21:44 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:11.132 08:21:44 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:11.132 08:21:44 -- common/autotest_common.sh@1184 -- # grep -c SPDK7 00:21:11.132 08:21:44 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:11.132 08:21:44 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:11.132 08:21:44 -- common/autotest_common.sh@1185 -- # return 0 00:21:11.132 08:21:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:11.132 08:21:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:12.509 08:21:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:12.509 08:21:46 -- common/autotest_common.sh@1175 -- # local i=0 00:21:12.509 08:21:46 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:12.509 08:21:46 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:12.509 08:21:46 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:15.040 08:21:48 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:15.040 08:21:48 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:15.040 08:21:48 -- common/autotest_common.sh@1184 -- # grep -c SPDK8 00:21:15.040 08:21:48 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:15.040 08:21:48 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.040 08:21:48 -- common/autotest_common.sh@1185 -- # return 0 00:21:15.040 08:21:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.040 08:21:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:15.975 08:21:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:15.975 08:21:49 -- common/autotest_common.sh@1175 -- # local i=0 00:21:15.975 08:21:49 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:15.975 08:21:49 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:15.975 08:21:49 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:17.875 08:21:51 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:17.875 08:21:51 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:17.875 08:21:51 -- common/autotest_common.sh@1184 -- # grep -c SPDK9 00:21:17.875 08:21:51 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:17.875 08:21:51 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.875 08:21:51 -- common/autotest_common.sh@1185 -- # return 0 00:21:17.875 08:21:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:17.875 08:21:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:19.250 08:21:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:19.250 08:21:52 -- common/autotest_common.sh@1175 -- # local i=0 00:21:19.250 08:21:52 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:19.250 08:21:52 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:19.250 08:21:52 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:21.779 08:21:54 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:21.779 08:21:54 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:21.779 08:21:54 -- common/autotest_common.sh@1184 -- # grep -c SPDK10 00:21:21.779 08:21:54 -- 
common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:21.779 08:21:54 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:21.779 08:21:54 -- common/autotest_common.sh@1185 -- # return 0 00:21:21.779 08:21:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.779 08:21:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:23.154 08:21:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:23.154 08:21:56 -- common/autotest_common.sh@1175 -- # local i=0 00:21:23.154 08:21:56 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:21:23.154 08:21:56 -- common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:21:23.154 08:21:56 -- common/autotest_common.sh@1182 -- # sleep 2 00:21:25.055 08:21:58 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:21:25.055 08:21:58 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:21:25.055 08:21:58 -- common/autotest_common.sh@1184 -- # grep -c SPDK11 00:21:25.055 08:21:58 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:21:25.055 08:21:58 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:21:25.055 08:21:58 -- common/autotest_common.sh@1185 -- # return 0 00:21:25.055 08:21:58 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:25.055 [global] 00:21:25.055 thread=1 00:21:25.055 invalidate=1 00:21:25.055 rw=read 00:21:25.055 time_based=1 00:21:25.055 runtime=10 00:21:25.055 ioengine=libaio 00:21:25.055 direct=1 00:21:25.055 bs=262144 00:21:25.055 iodepth=64 00:21:25.055 norandommap=1 00:21:25.055 numjobs=1 00:21:25.055 00:21:25.055 [job0] 00:21:25.055 filename=/dev/nvme0n1 00:21:25.055 [job1] 
00:21:25.055 filename=/dev/nvme10n1 00:21:25.055 [job2] 00:21:25.055 filename=/dev/nvme11n1 00:21:25.055 [job3] 00:21:25.055 filename=/dev/nvme2n1 00:21:25.055 [job4] 00:21:25.055 filename=/dev/nvme3n1 00:21:25.055 [job5] 00:21:25.055 filename=/dev/nvme4n1 00:21:25.055 [job6] 00:21:25.055 filename=/dev/nvme5n1 00:21:25.055 [job7] 00:21:25.055 filename=/dev/nvme6n1 00:21:25.055 [job8] 00:21:25.055 filename=/dev/nvme7n1 00:21:25.055 [job9] 00:21:25.055 filename=/dev/nvme8n1 00:21:25.055 [job10] 00:21:25.055 filename=/dev/nvme9n1 00:21:25.055 Could not set queue depth (nvme0n1) 00:21:25.055 Could not set queue depth (nvme10n1) 00:21:25.055 Could not set queue depth (nvme11n1) 00:21:25.055 Could not set queue depth (nvme2n1) 00:21:25.055 Could not set queue depth (nvme3n1) 00:21:25.055 Could not set queue depth (nvme4n1) 00:21:25.055 Could not set queue depth (nvme5n1) 00:21:25.055 Could not set queue depth (nvme6n1) 00:21:25.055 Could not set queue depth (nvme7n1) 00:21:25.055 Could not set queue depth (nvme8n1) 00:21:25.055 Could not set queue depth (nvme9n1) 00:21:25.313 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:25.313 fio-3.35 00:21:25.313 Starting 11 threads 00:21:37.569 00:21:37.569 job0: (groupid=0, jobs=1): err= 0: pid=2319353: Tue Feb 13 08:22:09 2024 00:21:37.569 read: IOPS=733, BW=183MiB/s (192MB/s)(1850MiB/10094msec) 00:21:37.569 slat (usec): min=9, max=189249, avg=971.82, stdev=5219.86 00:21:37.569 clat (usec): min=1554, max=385438, avg=86249.36, stdev=55551.31 00:21:37.569 lat (usec): min=1585, max=385469, avg=87221.18, stdev=56268.31 00:21:37.569 clat percentiles (msec): 00:21:37.569 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 19], 20.00th=[ 37], 00:21:37.569 | 30.00th=[ 52], 40.00th=[ 66], 50.00th=[ 80], 60.00th=[ 96], 00:21:37.569 | 70.00th=[ 111], 80.00th=[ 126], 90.00th=[ 161], 95.00th=[ 197], 00:21:37.569 | 99.00th=[ 234], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 292], 00:21:37.569 | 99.99th=[ 384] 00:21:37.569 bw ( KiB/s): min=83968, max=394240, per=9.12%, avg=187801.60, stdev=73896.90, samples=20 00:21:37.569 iops : min= 328, max= 1540, avg=733.60, stdev=288.66, samples=20 00:21:37.569 lat (msec) : 2=0.04%, 4=0.27%, 10=3.22%, 20=7.81%, 50=17.56% 00:21:37.569 lat (msec) : 100=33.07%, 250=37.23%, 500=0.80% 00:21:37.569 cpu : usr=0.30%, sys=2.33%, ctx=1991, majf=0, minf=4097 00:21:37.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:21:37.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.569 issued rwts: total=7399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.569 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:21:37.569 job1: (groupid=0, jobs=1): err= 0: pid=2319357: Tue Feb 13 08:22:09 2024 00:21:37.569 read: IOPS=854, BW=214MiB/s (224MB/s)(2155MiB/10083msec) 00:21:37.569 slat (usec): min=10, max=126817, avg=885.47, stdev=4434.68 00:21:37.569 clat (usec): min=1652, max=320145, avg=73911.99, stdev=56432.60 00:21:37.569 lat (usec): min=1681, max=320173, avg=74797.46, stdev=57088.20 00:21:37.569 clat percentiles (msec): 00:21:37.569 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 30], 00:21:37.569 | 30.00th=[ 35], 40.00th=[ 43], 50.00th=[ 52], 60.00th=[ 68], 00:21:37.569 | 70.00th=[ 92], 80.00th=[ 125], 90.00th=[ 165], 95.00th=[ 182], 00:21:37.569 | 99.00th=[ 230], 99.50th=[ 266], 99.90th=[ 292], 99.95th=[ 292], 00:21:37.569 | 99.99th=[ 321] 00:21:37.569 bw ( KiB/s): min=94208, max=459776, per=10.63%, avg=219044.55, stdev=112778.72, samples=20 00:21:37.569 iops : min= 368, max= 1796, avg=855.60, stdev=440.59, samples=20 00:21:37.569 lat (msec) : 2=0.10%, 4=0.80%, 10=3.46%, 20=5.89%, 50=38.67% 00:21:37.569 lat (msec) : 100=24.16%, 250=26.36%, 500=0.56% 00:21:37.569 cpu : usr=0.24%, sys=3.00%, ctx=2062, majf=0, minf=4097 00:21:37.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:37.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.569 issued rwts: total=8619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.569 job2: (groupid=0, jobs=1): err= 0: pid=2319358: Tue Feb 13 08:22:09 2024 00:21:37.569 read: IOPS=644, BW=161MiB/s (169MB/s)(1624MiB/10074msec) 00:21:37.569 slat (usec): min=8, max=118522, avg=1026.54, stdev=5145.51 00:21:37.569 clat (usec): min=1791, max=280185, avg=98117.74, stdev=56084.84 00:21:37.569 lat (usec): min=1822, max=280436, avg=99144.28, stdev=56800.13 00:21:37.569 clat percentiles (msec): 
00:21:37.569 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 40], 00:21:37.569 | 30.00th=[ 62], 40.00th=[ 82], 50.00th=[ 100], 60.00th=[ 111], 00:21:37.569 | 70.00th=[ 129], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 190], 00:21:37.569 | 99.00th=[ 211], 99.50th=[ 222], 99.90th=[ 230], 99.95th=[ 232], 00:21:37.569 | 99.99th=[ 279] 00:21:37.569 bw ( KiB/s): min=78848, max=248320, per=7.99%, avg=164659.20, stdev=53585.69, samples=20 00:21:37.569 iops : min= 308, max= 970, avg=643.20, stdev=209.32, samples=20 00:21:37.569 lat (msec) : 2=0.06%, 4=0.72%, 10=2.73%, 20=5.68%, 50=15.64% 00:21:37.569 lat (msec) : 100=25.79%, 250=49.33%, 500=0.05% 00:21:37.569 cpu : usr=0.22%, sys=2.18%, ctx=1889, majf=0, minf=3347 00:21:37.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:37.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.569 issued rwts: total=6495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.569 job3: (groupid=0, jobs=1): err= 0: pid=2319360: Tue Feb 13 08:22:09 2024 00:21:37.569 read: IOPS=698, BW=175MiB/s (183MB/s)(1763MiB/10092msec) 00:21:37.569 slat (usec): min=10, max=159028, avg=1263.48, stdev=5478.53 00:21:37.569 clat (msec): min=2, max=320, avg=90.21, stdev=55.04 00:21:37.570 lat (msec): min=3, max=320, avg=91.48, stdev=55.88 00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 39], 00:21:37.570 | 30.00th=[ 47], 40.00th=[ 61], 50.00th=[ 83], 60.00th=[ 107], 00:21:37.570 | 70.00th=[ 117], 80.00th=[ 144], 90.00th=[ 167], 95.00th=[ 190], 00:21:37.570 | 99.00th=[ 224], 99.50th=[ 224], 99.90th=[ 255], 99.95th=[ 268], 00:21:37.570 | 99.99th=[ 321] 00:21:37.570 bw ( KiB/s): min=76288, max=375808, per=8.69%, avg=178918.40, stdev=89105.88, samples=20 00:21:37.570 iops : min= 298, max= 1468, 
avg=698.90, stdev=348.07, samples=20 00:21:37.570 lat (msec) : 4=0.06%, 10=0.28%, 20=5.56%, 50=28.46%, 100=20.53% 00:21:37.570 lat (msec) : 250=44.99%, 500=0.11% 00:21:37.570 cpu : usr=0.34%, sys=2.75%, ctx=1585, majf=0, minf=4097 00:21:37.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:37.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.570 issued rwts: total=7052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.570 job4: (groupid=0, jobs=1): err= 0: pid=2319368: Tue Feb 13 08:22:09 2024 00:21:37.570 read: IOPS=560, BW=140MiB/s (147MB/s)(1414MiB/10096msec) 00:21:37.570 slat (usec): min=8, max=171895, avg=1506.05, stdev=6474.66 00:21:37.570 clat (msec): min=5, max=275, avg=112.64, stdev=57.21 00:21:37.570 lat (msec): min=5, max=322, avg=114.15, stdev=58.07 00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 47], 00:21:37.570 | 30.00th=[ 96], 40.00th=[ 109], 50.00th=[ 116], 60.00th=[ 131], 00:21:37.570 | 70.00th=[ 144], 80.00th=[ 157], 90.00th=[ 180], 95.00th=[ 201], 00:21:37.570 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:21:37.570 | 99.99th=[ 275] 00:21:37.570 bw ( KiB/s): min=67584, max=310784, per=6.95%, avg=143129.60, stdev=66637.79, samples=20 00:21:37.570 iops : min= 264, max= 1214, avg=559.10, stdev=260.30, samples=20 00:21:37.570 lat (msec) : 10=1.20%, 20=5.46%, 50=13.90%, 100=11.30%, 250=66.56% 00:21:37.570 lat (msec) : 500=1.57% 00:21:37.570 cpu : usr=0.28%, sys=2.01%, ctx=1333, majf=0, minf=4097 00:21:37.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:37.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:21:37.570 issued rwts: total=5655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.570 job5: (groupid=0, jobs=1): err= 0: pid=2319376: Tue Feb 13 08:22:09 2024 00:21:37.570 read: IOPS=570, BW=143MiB/s (149MB/s)(1439MiB/10093msec) 00:21:37.570 slat (usec): min=10, max=160284, avg=1531.22, stdev=5336.17 00:21:37.570 clat (msec): min=9, max=223, avg=110.59, stdev=47.19 00:21:37.570 lat (msec): min=9, max=309, avg=112.13, stdev=47.92 00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 67], 00:21:37.570 | 30.00th=[ 80], 40.00th=[ 103], 50.00th=[ 113], 60.00th=[ 122], 00:21:37.570 | 70.00th=[ 131], 80.00th=[ 155], 90.00th=[ 178], 95.00th=[ 190], 00:21:37.570 | 99.00th=[ 213], 99.50th=[ 218], 99.90th=[ 222], 99.95th=[ 224], 00:21:37.570 | 99.99th=[ 224] 00:21:37.570 bw ( KiB/s): min=94208, max=251904, per=7.07%, avg=145729.45, stdev=41432.85, samples=20 00:21:37.570 iops : min= 368, max= 984, avg=569.25, stdev=161.85, samples=20 00:21:37.570 lat (msec) : 10=0.02%, 20=0.75%, 50=11.45%, 100=27.05%, 250=60.73% 00:21:37.570 cpu : usr=0.25%, sys=2.44%, ctx=1320, majf=0, minf=4097 00:21:37.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:37.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.570 issued rwts: total=5755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.570 job6: (groupid=0, jobs=1): err= 0: pid=2319377: Tue Feb 13 08:22:09 2024 00:21:37.570 read: IOPS=653, BW=163MiB/s (171MB/s)(1649MiB/10095msec) 00:21:37.570 slat (usec): min=8, max=159351, avg=1265.91, stdev=5486.19 00:21:37.570 clat (usec): min=1784, max=305701, avg=96601.91, stdev=59958.79 00:21:37.570 lat (usec): min=1841, max=323174, avg=97867.82, stdev=60923.30 
00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 36], 00:21:37.570 | 30.00th=[ 57], 40.00th=[ 71], 50.00th=[ 91], 60.00th=[ 113], 00:21:37.570 | 70.00th=[ 124], 80.00th=[ 146], 90.00th=[ 180], 95.00th=[ 209], 00:21:37.570 | 99.00th=[ 251], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:21:37.570 | 99.99th=[ 305] 00:21:37.570 bw ( KiB/s): min=72192, max=476672, per=8.12%, avg=167168.00, stdev=95573.95, samples=20 00:21:37.570 iops : min= 282, max= 1862, avg=653.00, stdev=373.34, samples=20 00:21:37.570 lat (msec) : 2=0.06%, 4=0.26%, 10=3.25%, 20=4.53%, 50=17.47% 00:21:37.570 lat (msec) : 100=27.68%, 250=45.56%, 500=1.20% 00:21:37.570 cpu : usr=0.26%, sys=2.51%, ctx=1729, majf=0, minf=4097 00:21:37.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:37.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.570 issued rwts: total=6594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.570 job7: (groupid=0, jobs=1): err= 0: pid=2319378: Tue Feb 13 08:22:09 2024 00:21:37.570 read: IOPS=1244, BW=311MiB/s (326MB/s)(3115MiB/10014msec) 00:21:37.570 slat (usec): min=9, max=74294, avg=695.21, stdev=2871.70 00:21:37.570 clat (usec): min=1461, max=232439, avg=50690.14, stdev=39954.14 00:21:37.570 lat (usec): min=1494, max=239705, avg=51385.35, stdev=40517.33 00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 28], 00:21:37.570 | 30.00th=[ 30], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 40], 00:21:37.570 | 70.00th=[ 51], 80.00th=[ 69], 90.00th=[ 108], 95.00th=[ 144], 00:21:37.570 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 222], 99.95th=[ 232], 00:21:37.570 | 99.99th=[ 232] 00:21:37.570 bw ( KiB/s): min=84992, max=534528, per=15.41%, avg=317401.25, 
stdev=158160.31, samples=20 00:21:37.570 iops : min= 332, max= 2088, avg=1239.80, stdev=617.88, samples=20 00:21:37.570 lat (msec) : 2=0.02%, 4=0.05%, 10=1.85%, 20=5.71%, 50=62.27% 00:21:37.570 lat (msec) : 100=18.55%, 250=11.56% 00:21:37.570 cpu : usr=0.43%, sys=4.31%, ctx=2503, majf=0, minf=4097 00:21:37.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:37.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.570 issued rwts: total=12461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.570 job8: (groupid=0, jobs=1): err= 0: pid=2319379: Tue Feb 13 08:22:09 2024 00:21:37.570 read: IOPS=602, BW=151MiB/s (158MB/s)(1518MiB/10074msec) 00:21:37.570 slat (usec): min=10, max=80414, avg=1489.46, stdev=4847.67 00:21:37.570 clat (msec): min=4, max=244, avg=104.63, stdev=45.46 00:21:37.570 lat (msec): min=6, max=244, avg=106.12, stdev=46.24 00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 64], 00:21:37.570 | 30.00th=[ 82], 40.00th=[ 93], 50.00th=[ 103], 60.00th=[ 112], 00:21:37.570 | 70.00th=[ 129], 80.00th=[ 148], 90.00th=[ 169], 95.00th=[ 184], 00:21:37.570 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 222], 99.95th=[ 230], 00:21:37.570 | 99.99th=[ 245] 00:21:37.570 bw ( KiB/s): min=83456, max=283648, per=7.47%, avg=153789.75, stdev=56759.85, samples=20 00:21:37.570 iops : min= 326, max= 1108, avg=600.70, stdev=221.75, samples=20 00:21:37.570 lat (msec) : 10=1.02%, 20=2.70%, 50=8.75%, 100=34.86%, 250=52.67% 00:21:37.570 cpu : usr=0.24%, sys=2.21%, ctx=1431, majf=0, minf=4097 00:21:37.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:37.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.570 issued rwts: total=6070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.570 job9: (groupid=0, jobs=1): err= 0: pid=2319380: Tue Feb 13 08:22:09 2024 00:21:37.570 read: IOPS=749, BW=187MiB/s (196MB/s)(1887MiB/10076msec) 00:21:37.570 slat (usec): min=11, max=132899, avg=1232.14, stdev=4179.12 00:21:37.570 clat (msec): min=5, max=274, avg=84.12, stdev=50.40 00:21:37.570 lat (msec): min=5, max=274, avg=85.36, stdev=51.09 00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 33], 00:21:37.570 | 30.00th=[ 50], 40.00th=[ 67], 50.00th=[ 79], 60.00th=[ 89], 00:21:37.570 | 70.00th=[ 102], 80.00th=[ 121], 90.00th=[ 163], 95.00th=[ 188], 00:21:37.570 | 99.00th=[ 213], 99.50th=[ 224], 99.90th=[ 232], 99.95th=[ 239], 00:21:37.570 | 99.99th=[ 275] 00:21:37.570 bw ( KiB/s): min=76800, max=489472, per=9.30%, avg=191590.40, stdev=108732.80, samples=20 00:21:37.570 iops : min= 300, max= 1912, avg=748.40, stdev=424.74, samples=20 00:21:37.570 lat (msec) : 10=0.76%, 20=2.56%, 50=27.16%, 100=38.33%, 250=31.16% 00:21:37.570 lat (msec) : 500=0.03% 00:21:37.570 cpu : usr=0.33%, sys=3.05%, ctx=1608, majf=0, minf=4097 00:21:37.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:37.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.570 issued rwts: total=7547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.570 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.570 job10: (groupid=0, jobs=1): err= 0: pid=2319381: Tue Feb 13 08:22:09 2024 00:21:37.570 read: IOPS=753, BW=188MiB/s (197MB/s)(1897MiB/10078msec) 00:21:37.570 slat (usec): min=10, max=126402, avg=1116.53, stdev=4278.66 00:21:37.570 clat (msec): min=4, max=281, avg=83.77, stdev=50.14 00:21:37.570 lat (msec): 
min=4, max=286, avg=84.89, stdev=50.81 00:21:37.570 clat percentiles (msec): 00:21:37.570 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 29], 20.00th=[ 37], 00:21:37.571 | 30.00th=[ 54], 40.00th=[ 63], 50.00th=[ 78], 60.00th=[ 92], 00:21:37.571 | 70.00th=[ 103], 80.00th=[ 117], 90.00th=[ 161], 95.00th=[ 186], 00:21:37.571 | 99.00th=[ 224], 99.50th=[ 236], 99.90th=[ 253], 99.95th=[ 253], 00:21:37.571 | 99.99th=[ 284] 00:21:37.571 bw ( KiB/s): min=96256, max=365568, per=9.35%, avg=192659.80, stdev=69180.89, samples=20 00:21:37.571 iops : min= 376, max= 1428, avg=752.55, stdev=270.24, samples=20 00:21:37.571 lat (msec) : 10=2.62%, 20=4.35%, 50=20.38%, 100=40.65%, 250=31.89% 00:21:37.571 lat (msec) : 500=0.11% 00:21:37.571 cpu : usr=0.22%, sys=3.11%, ctx=1800, majf=0, minf=4097 00:21:37.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:37.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:37.571 issued rwts: total=7589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.571 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:37.571 00:21:37.571 Run status group 0 (all jobs): 00:21:37.571 READ: bw=2012MiB/s (2109MB/s), 140MiB/s-311MiB/s (147MB/s-326MB/s), io=19.8GiB (21.3GB), run=10014-10096msec 00:21:37.571 00:21:37.571 Disk stats (read/write): 00:21:37.571 nvme0n1: ios=14612/0, merge=0/0, ticks=1236272/0, in_queue=1236272, util=97.36% 00:21:37.571 nvme10n1: ios=17076/0, merge=0/0, ticks=1224953/0, in_queue=1224953, util=97.56% 00:21:37.571 nvme11n1: ios=12847/0, merge=0/0, ticks=1237391/0, in_queue=1237391, util=97.68% 00:21:37.571 nvme2n1: ios=13929/0, merge=0/0, ticks=1226679/0, in_queue=1226679, util=97.83% 00:21:37.571 nvme3n1: ios=11117/0, merge=0/0, ticks=1226863/0, in_queue=1226863, util=97.90% 00:21:37.571 nvme4n1: ios=11315/0, merge=0/0, ticks=1226393/0, in_queue=1226393, util=98.26% 00:21:37.571 nvme5n1: 
ios=13011/0, merge=0/0, ticks=1231436/0, in_queue=1231436, util=98.38% 00:21:37.571 nvme6n1: ios=24588/0, merge=0/0, ticks=1236404/0, in_queue=1236404, util=98.51% 00:21:37.571 nvme7n1: ios=11962/0, merge=0/0, ticks=1230909/0, in_queue=1230909, util=98.95% 00:21:37.571 nvme8n1: ios=14920/0, merge=0/0, ticks=1229762/0, in_queue=1229762, util=99.11% 00:21:37.571 nvme9n1: ios=14994/0, merge=0/0, ticks=1232452/0, in_queue=1232452, util=99.21% 00:21:37.571 08:22:09 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:37.571 [global] 00:21:37.571 thread=1 00:21:37.571 invalidate=1 00:21:37.571 rw=randwrite 00:21:37.571 time_based=1 00:21:37.571 runtime=10 00:21:37.571 ioengine=libaio 00:21:37.571 direct=1 00:21:37.571 bs=262144 00:21:37.571 iodepth=64 00:21:37.571 norandommap=1 00:21:37.571 numjobs=1 00:21:37.571 00:21:37.571 [job0] 00:21:37.571 filename=/dev/nvme0n1 00:21:37.571 [job1] 00:21:37.571 filename=/dev/nvme10n1 00:21:37.571 [job2] 00:21:37.571 filename=/dev/nvme11n1 00:21:37.571 [job3] 00:21:37.571 filename=/dev/nvme2n1 00:21:37.571 [job4] 00:21:37.571 filename=/dev/nvme3n1 00:21:37.571 [job5] 00:21:37.571 filename=/dev/nvme4n1 00:21:37.571 [job6] 00:21:37.571 filename=/dev/nvme5n1 00:21:37.571 [job7] 00:21:37.571 filename=/dev/nvme6n1 00:21:37.571 [job8] 00:21:37.571 filename=/dev/nvme7n1 00:21:37.571 [job9] 00:21:37.571 filename=/dev/nvme8n1 00:21:37.571 [job10] 00:21:37.571 filename=/dev/nvme9n1 00:21:37.571 Could not set queue depth (nvme0n1) 00:21:37.571 Could not set queue depth (nvme10n1) 00:21:37.571 Could not set queue depth (nvme11n1) 00:21:37.571 Could not set queue depth (nvme2n1) 00:21:37.571 Could not set queue depth (nvme3n1) 00:21:37.571 Could not set queue depth (nvme4n1) 00:21:37.571 Could not set queue depth (nvme5n1) 00:21:37.571 Could not set queue depth (nvme6n1) 00:21:37.571 Could not set queue depth (nvme7n1) 00:21:37.571 Could not set 
queue depth (nvme8n1) 00:21:37.571 Could not set queue depth (nvme9n1) 00:21:37.571 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:37.571 fio-3.35 00:21:37.571 Starting 11 threads 00:21:47.549 00:21:47.549 job0: (groupid=0, jobs=1): err= 0: pid=2320914: Tue Feb 13 08:22:20 2024 00:21:47.549 write: IOPS=518, BW=130MiB/s (136MB/s)(1301MiB/10048msec); 0 zone resets 00:21:47.549 slat (usec): min=20, max=94317, avg=1564.64, stdev=4265.98 00:21:47.549 clat (msec): min=4, max=320, avg=121.95, stdev=77.24 00:21:47.549 lat (msec): min=4, max=320, avg=123.52, stdev=78.38 00:21:47.549 clat percentiles (msec): 00:21:47.549 | 1.00th=[ 10], 5.00th=[ 
21], 10.00th=[ 40], 20.00th=[ 47], 00:21:47.549 | 30.00th=[ 56], 40.00th=[ 90], 50.00th=[ 115], 60.00th=[ 127], 00:21:47.549 | 70.00th=[ 161], 80.00th=[ 207], 90.00th=[ 236], 95.00th=[ 253], 00:21:47.549 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 321], 00:21:47.549 | 99.99th=[ 321] 00:21:47.549 bw ( KiB/s): min=53248, max=320000, per=9.11%, avg=131635.20, stdev=69718.12, samples=20 00:21:47.549 iops : min= 208, max= 1250, avg=514.20, stdev=272.34, samples=20 00:21:47.549 lat (msec) : 10=1.34%, 20=3.36%, 50=21.98%, 100=16.98%, 250=50.53% 00:21:47.549 lat (msec) : 500=5.80% 00:21:47.550 cpu : usr=1.35%, sys=1.63%, ctx=2488, majf=0, minf=1 00:21:47.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.550 issued rwts: total=0,5205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.550 job1: (groupid=0, jobs=1): err= 0: pid=2320926: Tue Feb 13 08:22:20 2024 00:21:47.550 write: IOPS=484, BW=121MiB/s (127MB/s)(1220MiB/10067msec); 0 zone resets 00:21:47.550 slat (usec): min=23, max=133808, avg=1493.29, stdev=4357.71 00:21:47.550 clat (msec): min=2, max=281, avg=130.48, stdev=71.61 00:21:47.550 lat (msec): min=3, max=284, avg=131.97, stdev=72.55 00:21:47.550 clat percentiles (msec): 00:21:47.550 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 53], 00:21:47.550 | 30.00th=[ 78], 40.00th=[ 116], 50.00th=[ 126], 60.00th=[ 146], 00:21:47.550 | 70.00th=[ 182], 80.00th=[ 205], 90.00th=[ 226], 95.00th=[ 241], 00:21:47.550 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 284], 00:21:47.550 | 99.99th=[ 284] 00:21:47.550 bw ( KiB/s): min=67584, max=239104, per=8.54%, avg=123302.00, stdev=54379.25, samples=20 00:21:47.550 iops : min= 264, max= 934, avg=481.60, stdev=212.42, samples=20 
00:21:47.550 lat (msec) : 4=0.08%, 10=0.57%, 20=2.95%, 50=14.90%, 100=17.22% 00:21:47.550 lat (msec) : 250=61.30%, 500=2.97% 00:21:47.550 cpu : usr=0.99%, sys=1.73%, ctx=2601, majf=0, minf=1 00:21:47.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.550 issued rwts: total=0,4879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.550 job2: (groupid=0, jobs=1): err= 0: pid=2320927: Tue Feb 13 08:22:20 2024 00:21:47.550 write: IOPS=361, BW=90.3MiB/s (94.7MB/s)(920MiB/10190msec); 0 zone resets 00:21:47.550 slat (usec): min=26, max=47974, avg=2430.72, stdev=5060.09 00:21:47.550 clat (msec): min=3, max=408, avg=174.62, stdev=58.58 00:21:47.550 lat (msec): min=5, max=408, avg=177.05, stdev=59.39 00:21:47.550 clat percentiles (msec): 00:21:47.550 | 1.00th=[ 18], 5.00th=[ 59], 10.00th=[ 94], 20.00th=[ 123], 00:21:47.550 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 199], 00:21:47.550 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 234], 95.00th=[ 245], 00:21:47.550 | 99.00th=[ 288], 99.50th=[ 342], 99.90th=[ 397], 99.95th=[ 409], 00:21:47.550 | 99.99th=[ 409] 00:21:47.550 bw ( KiB/s): min=67584, max=159744, per=6.41%, avg=92595.20, stdev=27471.92, samples=20 00:21:47.550 iops : min= 264, max= 624, avg=361.70, stdev=107.31, samples=20 00:21:47.550 lat (msec) : 4=0.03%, 10=0.30%, 20=1.03%, 50=2.91%, 100=8.23% 00:21:47.550 lat (msec) : 250=84.81%, 500=2.69% 00:21:47.550 cpu : usr=1.11%, sys=1.32%, ctx=1417, majf=0, minf=1 00:21:47.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:21:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.550 issued rwts: 
total=0,3680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.550 job3: (groupid=0, jobs=1): err= 0: pid=2320928: Tue Feb 13 08:22:20 2024 00:21:47.550 write: IOPS=770, BW=193MiB/s (202MB/s)(1964MiB/10200msec); 0 zone resets 00:21:47.550 slat (usec): min=19, max=96586, avg=841.13, stdev=2426.83 00:21:47.550 clat (usec): min=1611, max=425866, avg=82221.24, stdev=50520.88 00:21:47.550 lat (usec): min=1649, max=425912, avg=83062.37, stdev=50874.09 00:21:47.550 clat percentiles (msec): 00:21:47.550 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 34], 20.00th=[ 46], 00:21:47.550 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 72], 60.00th=[ 83], 00:21:47.550 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 157], 95.00th=[ 184], 00:21:47.550 | 99.00th=[ 247], 99.50th=[ 271], 99.90th=[ 401], 99.95th=[ 414], 00:21:47.550 | 99.99th=[ 426] 00:21:47.550 bw ( KiB/s): min=89088, max=333824, per=13.81%, avg=199489.55, stdev=78768.89, samples=20 00:21:47.550 iops : min= 348, max= 1304, avg=779.25, stdev=307.70, samples=20 00:21:47.550 lat (msec) : 2=0.06%, 4=0.10%, 10=1.08%, 20=3.34%, 50=28.75% 00:21:47.550 lat (msec) : 100=37.15%, 250=28.57%, 500=0.95% 00:21:47.550 cpu : usr=1.39%, sys=2.37%, ctx=4132, majf=0, minf=1 00:21:47.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.550 issued rwts: total=0,7855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.550 job4: (groupid=0, jobs=1): err= 0: pid=2320929: Tue Feb 13 08:22:20 2024 00:21:47.550 write: IOPS=430, BW=108MiB/s (113MB/s)(1097MiB/10190msec); 0 zone resets 00:21:47.550 slat (usec): min=25, max=53393, avg=1905.69, stdev=4639.90 00:21:47.550 clat (msec): min=2, max=422, avg=146.62, stdev=78.69 00:21:47.550 lat 
(msec): min=2, max=422, avg=148.52, stdev=79.88 00:21:47.550 clat percentiles (msec): 00:21:47.550 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 64], 00:21:47.550 | 30.00th=[ 102], 40.00th=[ 125], 50.00th=[ 144], 60.00th=[ 180], 00:21:47.550 | 70.00th=[ 203], 80.00th=[ 224], 90.00th=[ 245], 95.00th=[ 259], 00:21:47.550 | 99.00th=[ 279], 99.50th=[ 359], 99.90th=[ 409], 99.95th=[ 409], 00:21:47.550 | 99.99th=[ 422] 00:21:47.550 bw ( KiB/s): min=61440, max=299008, per=7.66%, avg=110694.40, stdev=62830.16, samples=20 00:21:47.550 iops : min= 240, max= 1168, avg=432.40, stdev=245.43, samples=20 00:21:47.550 lat (msec) : 4=0.14%, 10=1.57%, 20=4.67%, 50=9.21%, 100=13.52% 00:21:47.550 lat (msec) : 250=63.39%, 500=7.50% 00:21:47.550 cpu : usr=1.21%, sys=1.30%, ctx=2142, majf=0, minf=1 00:21:47.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.550 issued rwts: total=0,4387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.550 job5: (groupid=0, jobs=1): err= 0: pid=2320930: Tue Feb 13 08:22:20 2024 00:21:47.550 write: IOPS=344, BW=86.2MiB/s (90.4MB/s)(879MiB/10191msec); 0 zone resets 00:21:47.550 slat (usec): min=27, max=48368, avg=2616.86, stdev=5356.49 00:21:47.550 clat (msec): min=5, max=404, avg=182.48, stdev=59.65 00:21:47.550 lat (msec): min=5, max=404, avg=185.10, stdev=60.53 00:21:47.550 clat percentiles (msec): 00:21:47.550 | 1.00th=[ 23], 5.00th=[ 58], 10.00th=[ 92], 20.00th=[ 140], 00:21:47.550 | 30.00th=[ 169], 40.00th=[ 180], 50.00th=[ 190], 60.00th=[ 205], 00:21:47.550 | 70.00th=[ 215], 80.00th=[ 228], 90.00th=[ 245], 95.00th=[ 257], 00:21:47.550 | 99.00th=[ 284], 99.50th=[ 355], 99.90th=[ 393], 99.95th=[ 405], 00:21:47.550 | 99.99th=[ 405] 00:21:47.550 bw ( KiB/s): min=63488, 
max=185344, per=6.11%, avg=88320.00, stdev=28110.47, samples=20 00:21:47.550 iops : min= 248, max= 724, avg=345.00, stdev=109.81, samples=20 00:21:47.550 lat (msec) : 10=0.20%, 20=0.60%, 50=2.90%, 100=7.09%, 250=81.22% 00:21:47.550 lat (msec) : 500=8.00% 00:21:47.550 cpu : usr=1.33%, sys=1.21%, ctx=1329, majf=0, minf=1 00:21:47.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:21:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.550 issued rwts: total=0,3514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.550 job6: (groupid=0, jobs=1): err= 0: pid=2320931: Tue Feb 13 08:22:20 2024 00:21:47.550 write: IOPS=620, BW=155MiB/s (163MB/s)(1562MiB/10073msec); 0 zone resets 00:21:47.550 slat (usec): min=26, max=50438, avg=1373.90, stdev=3103.52 00:21:47.550 clat (msec): min=3, max=289, avg=101.73, stdev=44.89 00:21:47.550 lat (msec): min=4, max=293, avg=103.10, stdev=45.45 00:21:47.550 clat percentiles (msec): 00:21:47.550 | 1.00th=[ 14], 5.00th=[ 37], 10.00th=[ 51], 20.00th=[ 71], 00:21:47.550 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 102], 60.00th=[ 107], 00:21:47.550 | 70.00th=[ 117], 80.00th=[ 128], 90.00th=[ 155], 95.00th=[ 194], 00:21:47.550 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 284], 99.95th=[ 284], 00:21:47.550 | 99.99th=[ 288] 00:21:47.550 bw ( KiB/s): min=73728, max=233984, per=10.96%, avg=158336.00, stdev=48661.38, samples=20 00:21:47.550 iops : min= 288, max= 914, avg=618.50, stdev=190.08, samples=20 00:21:47.550 lat (msec) : 4=0.02%, 10=0.50%, 20=1.79%, 50=7.55%, 100=38.51% 00:21:47.550 lat (msec) : 250=51.12%, 500=0.51% 00:21:47.550 cpu : usr=1.52%, sys=2.06%, ctx=2421, majf=0, minf=1 00:21:47.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:21:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.550 issued rwts: total=0,6248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.550 job7: (groupid=0, jobs=1): err= 0: pid=2320932: Tue Feb 13 08:22:20 2024 00:21:47.550 write: IOPS=772, BW=193MiB/s (203MB/s)(1969MiB/10192msec); 0 zone resets 00:21:47.550 slat (usec): min=23, max=67174, avg=957.09, stdev=2683.29 00:21:47.550 clat (msec): min=5, max=409, avg=81.83, stdev=56.47 00:21:47.550 lat (msec): min=6, max=409, avg=82.78, stdev=57.01 00:21:47.550 clat percentiles (msec): 00:21:47.550 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 45], 20.00th=[ 48], 00:21:47.550 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 69], 00:21:47.550 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 163], 95.00th=[ 213], 00:21:47.550 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 384], 99.95th=[ 397], 00:21:47.550 | 99.99th=[ 409] 00:21:47.550 bw ( KiB/s): min=82944, max=336896, per=13.85%, avg=200005.30, stdev=86830.99, samples=20 00:21:47.550 iops : min= 324, max= 1316, avg=781.25, stdev=339.19, samples=20 00:21:47.550 lat (msec) : 10=0.27%, 20=2.72%, 50=29.76%, 100=39.72%, 250=25.36% 00:21:47.551 lat (msec) : 500=2.18% 00:21:47.551 cpu : usr=1.90%, sys=2.43%, ctx=3526, majf=0, minf=1 00:21:47.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:47.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.551 issued rwts: total=0,7876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.551 job8: (groupid=0, jobs=1): err= 0: pid=2320935: Tue Feb 13 08:22:20 2024 00:21:47.551 write: IOPS=598, BW=150MiB/s (157MB/s)(1524MiB/10187msec); 0 zone resets 00:21:47.551 slat (usec): min=20, max=39367, avg=1395.30, 
stdev=3287.32 00:21:47.551 clat (msec): min=3, max=418, avg=105.54, stdev=62.99 00:21:47.551 lat (msec): min=3, max=418, avg=106.93, stdev=63.80 00:21:47.551 clat percentiles (msec): 00:21:47.551 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 50], 00:21:47.551 | 30.00th=[ 55], 40.00th=[ 72], 50.00th=[ 100], 60.00th=[ 115], 00:21:47.551 | 70.00th=[ 136], 80.00th=[ 167], 90.00th=[ 190], 95.00th=[ 220], 00:21:47.551 | 99.00th=[ 247], 99.50th=[ 305], 99.90th=[ 393], 99.95th=[ 405], 00:21:47.551 | 99.99th=[ 418] 00:21:47.551 bw ( KiB/s): min=70144, max=327680, per=10.69%, avg=154406.95, stdev=79638.54, samples=20 00:21:47.551 iops : min= 274, max= 1280, avg=603.15, stdev=311.09, samples=20 00:21:47.551 lat (msec) : 4=0.05%, 10=1.03%, 20=2.69%, 50=17.69%, 100=28.78% 00:21:47.551 lat (msec) : 250=48.90%, 500=0.85% 00:21:47.551 cpu : usr=1.29%, sys=1.88%, ctx=2619, majf=0, minf=1 00:21:47.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:47.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.551 issued rwts: total=0,6094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.551 job9: (groupid=0, jobs=1): err= 0: pid=2320936: Tue Feb 13 08:22:20 2024 00:21:47.551 write: IOPS=452, BW=113MiB/s (119MB/s)(1153MiB/10191msec); 0 zone resets 00:21:47.551 slat (usec): min=24, max=41960, avg=1527.72, stdev=3884.02 00:21:47.551 clat (msec): min=3, max=362, avg=139.81, stdev=66.68 00:21:47.551 lat (msec): min=6, max=362, avg=141.33, stdev=67.40 00:21:47.551 clat percentiles (msec): 00:21:47.551 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 77], 00:21:47.551 | 30.00th=[ 110], 40.00th=[ 125], 50.00th=[ 132], 60.00th=[ 159], 00:21:47.551 | 70.00th=[ 190], 80.00th=[ 205], 90.00th=[ 222], 95.00th=[ 236], 00:21:47.551 | 99.00th=[ 271], 99.50th=[ 309], 99.90th=[ 
355], 99.95th=[ 359], 00:21:47.551 | 99.99th=[ 363] 00:21:47.551 bw ( KiB/s): min=70656, max=224768, per=8.06%, avg=116414.75, stdev=40790.47, samples=20 00:21:47.551 iops : min= 276, max= 878, avg=454.70, stdev=159.34, samples=20 00:21:47.551 lat (msec) : 4=0.02%, 10=0.24%, 20=3.73%, 50=8.18%, 100=15.07% 00:21:47.551 lat (msec) : 250=70.61%, 500=2.15% 00:21:47.551 cpu : usr=1.09%, sys=1.42%, ctx=2527, majf=0, minf=1 00:21:47.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:47.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.551 issued rwts: total=0,4611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.551 job10: (groupid=0, jobs=1): err= 0: pid=2320937: Tue Feb 13 08:22:20 2024 00:21:47.551 write: IOPS=314, BW=78.6MiB/s (82.5MB/s)(801MiB/10189msec); 0 zone resets 00:21:47.551 slat (usec): min=30, max=104082, avg=2874.58, stdev=6133.76 00:21:47.551 clat (msec): min=4, max=439, avg=200.07, stdev=58.05 00:21:47.551 lat (msec): min=4, max=439, avg=202.94, stdev=58.75 00:21:47.551 clat percentiles (msec): 00:21:47.551 | 1.00th=[ 12], 5.00th=[ 72], 10.00th=[ 138], 20.00th=[ 174], 00:21:47.551 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 203], 60.00th=[ 218], 00:21:47.551 | 70.00th=[ 230], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 275], 00:21:47.551 | 99.00th=[ 300], 99.50th=[ 334], 99.90th=[ 426], 99.95th=[ 439], 00:21:47.551 | 99.99th=[ 439] 00:21:47.551 bw ( KiB/s): min=57856, max=115200, per=5.57%, avg=80444.80, stdev=15818.93, samples=20 00:21:47.551 iops : min= 226, max= 450, avg=314.20, stdev=61.75, samples=20 00:21:47.551 lat (msec) : 10=0.90%, 20=0.94%, 50=1.44%, 100=3.96%, 250=75.35% 00:21:47.551 lat (msec) : 500=17.41% 00:21:47.551 cpu : usr=1.01%, sys=0.96%, ctx=1141, majf=0, minf=1 00:21:47.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.5%, 32=1.0%, >=64=98.0% 00:21:47.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.551 issued rwts: total=0,3205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.551 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.551 00:21:47.551 Run status group 0 (all jobs): 00:21:47.551 WRITE: bw=1411MiB/s (1479MB/s), 78.6MiB/s-193MiB/s (82.5MB/s-203MB/s), io=14.1GiB (15.1GB), run=10048-10200msec 00:21:47.551 00:21:47.551 Disk stats (read/write): 00:21:47.551 nvme0n1: ios=48/10095, merge=0/0, ticks=3117/1218867, in_queue=1221984, util=100.00% 00:21:47.551 nvme10n1: ios=49/9491, merge=0/0, ticks=2251/1211540, in_queue=1213791, util=100.00% 00:21:47.551 nvme11n1: ios=45/7346, merge=0/0, ticks=1931/1235027, in_queue=1236958, util=100.00% 00:21:47.551 nvme2n1: ios=0/15683, merge=0/0, ticks=0/1247627, in_queue=1247627, util=97.65% 00:21:47.551 nvme3n1: ios=49/8761, merge=0/0, ticks=879/1239654, in_queue=1240533, util=99.99% 00:21:47.551 nvme4n1: ios=54/7013, merge=0/0, ticks=1084/1232564, in_queue=1233648, util=100.00% 00:21:47.551 nvme5n1: ios=44/12256, merge=0/0, ticks=1280/1207619, in_queue=1208899, util=100.00% 00:21:47.551 nvme6n1: ios=0/15734, merge=0/0, ticks=0/1244075, in_queue=1244075, util=98.32% 00:21:47.551 nvme7n1: ios=45/12177, merge=0/0, ticks=109/1239733, in_queue=1239842, util=99.25% 00:21:47.551 nvme8n1: ios=43/9207, merge=0/0, ticks=988/1246853, in_queue=1247841, util=100.00% 00:21:47.551 nvme9n1: ios=44/6397, merge=0/0, ticks=1734/1228297, in_queue=1230031, util=100.00% 00:21:47.551 08:22:20 -- target/multiconnection.sh@36 -- # sync 00:21:47.551 08:22:20 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:47.551 08:22:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.551 08:22:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:47.551 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:47.551 08:22:20 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:47.551 08:22:20 -- common/autotest_common.sh@1196 -- # local i=0 00:21:47.551 08:22:20 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:47.551 08:22:20 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK1 00:21:47.551 08:22:20 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:47.551 08:22:20 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK1 00:21:47.551 08:22:20 -- common/autotest_common.sh@1208 -- # return 0 00:21:47.551 08:22:20 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.551 08:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.551 08:22:20 -- common/autotest_common.sh@10 -- # set +x 00:21:47.551 08:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.551 08:22:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.551 08:22:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:47.810 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:47.810 08:22:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:47.810 08:22:21 -- common/autotest_common.sh@1196 -- # local i=0 00:21:47.810 08:22:21 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:47.810 08:22:21 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK2 00:21:47.810 08:22:21 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:47.810 08:22:21 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK2 00:21:47.810 08:22:21 -- common/autotest_common.sh@1208 -- # return 0 00:21:47.810 08:22:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:47.810 08:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.810 08:22:21 -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.810 08:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.810 08:22:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.810 08:22:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:48.069 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:48.069 08:22:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:48.069 08:22:21 -- common/autotest_common.sh@1196 -- # local i=0 00:21:48.069 08:22:21 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:48.069 08:22:21 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK3 00:21:48.069 08:22:21 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK3 00:21:48.069 08:22:21 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:48.069 08:22:21 -- common/autotest_common.sh@1208 -- # return 0 00:21:48.069 08:22:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:48.069 08:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.069 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:21:48.069 08:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.069 08:22:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.069 08:22:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:48.327 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:48.327 08:22:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:48.327 08:22:21 -- common/autotest_common.sh@1196 -- # local i=0 00:21:48.327 08:22:21 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:48.327 08:22:21 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK4 00:21:48.327 08:22:21 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK4 00:21:48.327 08:22:21 -- common/autotest_common.sh@1204 -- # lsblk 
-l -o NAME,SERIAL 00:21:48.327 08:22:21 -- common/autotest_common.sh@1208 -- # return 0 00:21:48.327 08:22:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:48.327 08:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.328 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:21:48.328 08:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.328 08:22:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.328 08:22:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:48.586 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:48.586 08:22:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:48.586 08:22:22 -- common/autotest_common.sh@1196 -- # local i=0 00:21:48.586 08:22:22 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:48.586 08:22:22 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK5 00:21:48.586 08:22:22 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:48.586 08:22:22 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK5 00:21:48.586 08:22:22 -- common/autotest_common.sh@1208 -- # return 0 00:21:48.586 08:22:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:48.586 08:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.586 08:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:48.586 08:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.586 08:22:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.586 08:22:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:48.844 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:48.844 08:22:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:48.844 08:22:22 -- common/autotest_common.sh@1196 -- # local i=0 
00:21:48.844 08:22:22 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:48.844 08:22:22 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK6 00:21:48.844 08:22:22 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:48.844 08:22:22 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK6 00:21:48.844 08:22:22 -- common/autotest_common.sh@1208 -- # return 0 00:21:48.844 08:22:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:48.844 08:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.844 08:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:48.844 08:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.844 08:22:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.844 08:22:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:49.103 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:49.103 08:22:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:49.103 08:22:22 -- common/autotest_common.sh@1196 -- # local i=0 00:21:49.103 08:22:22 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:49.103 08:22:22 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK7 00:21:49.103 08:22:22 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:49.103 08:22:22 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK7 00:21:49.103 08:22:22 -- common/autotest_common.sh@1208 -- # return 0 00:21:49.103 08:22:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:49.103 08:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.103 08:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:49.103 08:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.103 08:22:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.103 08:22:22 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:49.361 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:49.361 08:22:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:49.361 08:22:22 -- common/autotest_common.sh@1196 -- # local i=0 00:21:49.361 08:22:22 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:49.361 08:22:22 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK8 00:21:49.361 08:22:22 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:49.361 08:22:22 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK8 00:21:49.361 08:22:22 -- common/autotest_common.sh@1208 -- # return 0 00:21:49.361 08:22:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:49.361 08:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.361 08:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:49.361 08:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.361 08:22:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.361 08:22:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:49.361 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:49.361 08:22:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:49.361 08:22:22 -- common/autotest_common.sh@1196 -- # local i=0 00:21:49.361 08:22:22 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:49.361 08:22:22 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK9 00:21:49.361 08:22:22 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:49.361 08:22:22 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK9 00:21:49.361 08:22:22 -- common/autotest_common.sh@1208 -- # return 0 00:21:49.361 08:22:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:49.361 08:22:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.361 08:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:49.361 08:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.361 08:22:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.361 08:22:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:49.620 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:49.620 08:22:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:49.620 08:22:23 -- common/autotest_common.sh@1196 -- # local i=0 00:21:49.620 08:22:23 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:49.620 08:22:23 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK10 00:21:49.620 08:22:23 -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:21:49.620 08:22:23 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK10 00:21:49.620 08:22:23 -- common/autotest_common.sh@1208 -- # return 0 00:21:49.620 08:22:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:49.620 08:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.620 08:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:49.620 08:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.620 08:22:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.620 08:22:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:49.620 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:49.620 08:22:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:49.620 08:22:23 -- common/autotest_common.sh@1196 -- # local i=0 00:21:49.620 08:22:23 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:21:49.620 08:22:23 -- common/autotest_common.sh@1197 -- # grep -q -w SPDK11 00:21:49.620 08:22:23 -- common/autotest_common.sh@1204 -- 
# lsblk -l -o NAME,SERIAL 00:21:49.620 08:22:23 -- common/autotest_common.sh@1204 -- # grep -q -w SPDK11 00:21:49.620 08:22:23 -- common/autotest_common.sh@1208 -- # return 0 00:21:49.620 08:22:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:49.620 08:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.620 08:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:49.620 08:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.620 08:22:23 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:49.620 08:22:23 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:49.620 08:22:23 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:49.620 08:22:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:49.620 08:22:23 -- nvmf/common.sh@116 -- # sync 00:21:49.620 08:22:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:49.620 08:22:23 -- nvmf/common.sh@119 -- # set +e 00:21:49.620 08:22:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:49.620 08:22:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:49.620 rmmod nvme_tcp 00:21:49.620 rmmod nvme_fabrics 00:21:49.620 rmmod nvme_keyring 00:21:49.879 08:22:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:49.879 08:22:23 -- nvmf/common.sh@123 -- # set -e 00:21:49.879 08:22:23 -- nvmf/common.sh@124 -- # return 0 00:21:49.879 08:22:23 -- nvmf/common.sh@477 -- # '[' -n 2312463 ']' 00:21:49.879 08:22:23 -- nvmf/common.sh@478 -- # killprocess 2312463 00:21:49.879 08:22:23 -- common/autotest_common.sh@924 -- # '[' -z 2312463 ']' 00:21:49.879 08:22:23 -- common/autotest_common.sh@928 -- # kill -0 2312463 00:21:49.879 08:22:23 -- common/autotest_common.sh@929 -- # uname 00:21:49.879 08:22:23 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:49.879 08:22:23 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2312463 00:21:49.879 08:22:23 -- 
common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:49.879 08:22:23 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:49.879 08:22:23 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2312463' 00:21:49.879 killing process with pid 2312463 00:21:49.879 08:22:23 -- common/autotest_common.sh@943 -- # kill 2312463 00:21:49.879 08:22:23 -- common/autotest_common.sh@948 -- # wait 2312463 00:21:50.138 08:22:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:50.138 08:22:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:50.138 08:22:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:50.138 08:22:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.138 08:22:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:50.138 08:22:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.138 08:22:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.138 08:22:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.673 08:22:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:52.673 00:21:52.673 real 1m11.626s 00:21:52.673 user 4m15.683s 00:21:52.673 sys 0m22.988s 00:21:52.673 08:22:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:52.673 08:22:25 -- common/autotest_common.sh@10 -- # set +x 00:21:52.673 ************************************ 00:21:52.673 END TEST nvmf_multiconnection 00:21:52.673 ************************************ 00:21:52.673 08:22:25 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:52.673 08:22:25 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:21:52.673 08:22:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:52.673 08:22:25 -- common/autotest_common.sh@10 -- # set +x 00:21:52.673 ************************************ 00:21:52.673 START TEST 
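The teardown trace above repeats one pattern for cnode3 through cnode11: `nvme disconnect`, wait until the initiator-side block device with the matching serial disappears, then delete the subsystem over RPC. A minimal sketch of the wait helper, following the `lsblk`/`grep` calls visible in the log (the 15-iteration bound and 1 s sleep are assumptions, not the exact SPDK source):

```shell
#!/usr/bin/env bash
# Wait until no block device reports the given NVMe serial number.
# Mirrors the "lsblk -l -o NAME,SERIAL | grep -q -w SPDKn" polling in the log.
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        i=$((i + 1))
        if [ "$i" -ge 15 ]; then
            return 1   # give up: the device never went away
        fi
        sleep 1
    done
    return 0
}
```

In the trace this runs between `nvme disconnect -n ...cnodeN` and `rpc_cmd nvmf_delete_subsystem`, so a subsystem is only deleted after its namespace has vanished from the initiator.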
nvmf_initiator_timeout 00:21:52.673 ************************************ 00:21:52.673 08:22:25 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:52.673 * Looking for test storage... 00:21:52.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.673 08:22:25 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.673 08:22:25 -- nvmf/common.sh@7 -- # uname -s 00:21:52.673 08:22:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.673 08:22:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.673 08:22:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.673 08:22:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.673 08:22:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.673 08:22:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.673 08:22:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.673 08:22:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.673 08:22:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.673 08:22:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.673 08:22:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:52.673 08:22:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:52.673 08:22:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.673 08:22:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.673 08:22:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.673 08:22:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.673 08:22:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.673 08:22:26 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.673 08:22:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.673 08:22:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.673 08:22:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.673 08:22:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.673 08:22:26 -- paths/export.sh@5 -- # export PATH 00:21:52.674 08:22:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.674 08:22:26 -- nvmf/common.sh@46 -- # : 0 00:21:52.674 08:22:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:52.674 08:22:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:52.674 08:22:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:52.674 08:22:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.674 08:22:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.674 08:22:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:52.674 08:22:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:52.674 08:22:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:52.674 08:22:26 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.674 08:22:26 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.674 08:22:26 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:52.674 08:22:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:52.674 08:22:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.674 08:22:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:52.674 08:22:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:52.674 08:22:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:52.674 08:22:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.674 08:22:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.674 08:22:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:52.674 08:22:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:52.674 08:22:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:52.674 08:22:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:52.674 08:22:26 -- common/autotest_common.sh@10 -- # set +x 00:21:57.941 08:22:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:57.941 08:22:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:57.941 08:22:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:57.941 08:22:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:57.941 08:22:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:57.941 08:22:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:57.941 08:22:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:57.941 08:22:31 -- nvmf/common.sh@294 -- # net_devs=() 00:21:57.941 08:22:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:57.941 08:22:31 -- nvmf/common.sh@295 -- # e810=() 00:21:57.941 08:22:31 -- nvmf/common.sh@295 -- # local -ga e810 00:21:57.941 08:22:31 -- nvmf/common.sh@296 -- # x722=() 00:21:57.941 08:22:31 -- nvmf/common.sh@296 -- # local -ga x722 00:21:57.941 08:22:31 -- nvmf/common.sh@297 -- # mlx=() 00:21:57.941 08:22:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:57.941 08:22:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:21:57.941 08:22:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.941 08:22:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:57.941 08:22:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:57.941 08:22:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:57.941 08:22:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:57.941 08:22:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:57.941 08:22:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:57.941 08:22:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.941 08:22:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:57.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:57.941 08:22:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.941 08:22:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.941 08:22:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.941 08:22:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.942 08:22:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:57.942 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:57.942 08:22:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:57.942 08:22:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:57.942 
08:22:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.942 08:22:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.942 08:22:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.942 08:22:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.942 08:22:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:57.942 Found net devices under 0000:af:00.0: cvl_0_0 00:21:57.942 08:22:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.942 08:22:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.942 08:22:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.942 08:22:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.942 08:22:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.942 08:22:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:57.942 Found net devices under 0000:af:00.1: cvl_0_1 00:21:57.942 08:22:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.942 08:22:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:57.942 08:22:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:57.942 08:22:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:57.942 08:22:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.942 08:22:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.942 08:22:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.942 08:22:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:57.942 08:22:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.942 08:22:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.942 08:22:31 -- 
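The discovery lines above ("Found net devices under 0000:af:00.0: cvl_0_0") come from globbing sysfs under each matched PCI address. A sketch of that mapping; the optional base-path parameter is added here purely so the function can be exercised against a fake tree and is not part of the real helper:

```shell
# Map a PCI address to the kernel net devices it exposes, via
# /sys/bus/pci/devices/<addr>/net/ -- the same glob the discovery loop uses.
pci_to_netdevs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local devs=("$base/$pci/net/"*)
    # An unmatched glob stays literal, so check the first entry really exists.
    [ -e "${devs[0]}" ] || return 1
    printf '%s\n' "${devs[@]##*/}"   # strip the leading path, keep e.g. cvl_0_0
}
```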
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:57.942 08:22:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.942 08:22:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.942 08:22:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:57.942 08:22:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:57.942 08:22:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.942 08:22:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.942 08:22:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.942 08:22:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.942 08:22:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:57.942 08:22:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.942 08:22:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.942 08:22:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.942 08:22:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:57.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:21:57.942 00:21:57.942 --- 10.0.0.2 ping statistics --- 00:21:57.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.942 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:21:57.942 08:22:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:21:57.942 00:21:57.942 --- 10.0.0.1 ping statistics --- 00:21:57.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.942 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:21:57.942 08:22:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.942 08:22:31 -- nvmf/common.sh@410 -- # return 0 00:21:57.942 08:22:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:57.942 08:22:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.942 08:22:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:57.942 08:22:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.942 08:22:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:57.942 08:22:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:57.942 08:22:31 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:57.942 08:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:57.942 08:22:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:57.942 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:57.942 08:22:31 -- nvmf/common.sh@469 -- # nvmfpid=2326654 00:21:57.942 08:22:31 -- nvmf/common.sh@470 -- # waitforlisten 2326654 00:21:57.942 08:22:31 -- common/autotest_common.sh@817 -- # '[' -z 2326654 ']' 00:21:57.942 08:22:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.942 08:22:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:57.942 08:22:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
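The `nvmf_tcp_init` trace above splits one two-port NIC between network namespaces: port `cvl_0_0` moves into `cvl_0_0_ns_spdk` for the target, port `cvl_0_1` stays in the root namespace for the initiator, and a ping in each direction proves the 10.0.0.0/24 link. The same sequence as a dry-runnable sketch (the `run` indirection is added here only for testing; the real script executes the commands directly, as root):

```shell
run() { "$@"; }   # indirection so the sequence below can be dry-run

setup_tcp_test_net() {
    local ns=$1 tgt_if=$2 ini_if=$3
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"          # target port into the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target side
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                         # root ns -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1     # target ns -> initiator
}

# Example (requires root): setup_tcp_test_net cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```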
00:21:57.942 08:22:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:57.942 08:22:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:57.942 08:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:57.942 [2024-02-13 08:22:31.382590] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:21:57.942 [2024-02-13 08:22:31.382635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.942 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.942 [2024-02-13 08:22:31.443983] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.942 [2024-02-13 08:22:31.520656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.942 [2024-02-13 08:22:31.520761] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.942 [2024-02-13 08:22:31.520769] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.942 [2024-02-13 08:22:31.520775] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:57.942 [2024-02-13 08:22:31.520809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.942 [2024-02-13 08:22:31.520908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.942 [2024-02-13 08:22:31.520926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.942 [2024-02-13 08:22:31.520926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.506 08:22:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:58.506 08:22:32 -- common/autotest_common.sh@850 -- # return 0 00:21:58.506 08:22:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:58.506 08:22:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:58.506 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.763 08:22:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.763 08:22:32 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:58.763 08:22:32 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:58.763 08:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.763 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.763 Malloc0 00:21:58.763 08:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.763 08:22:32 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:58.763 08:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.763 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.763 Delay0 00:21:58.763 08:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.764 08:22:32 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.764 08:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.764 08:22:32 -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.764 [2024-02-13 08:22:32.258899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.764 08:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.764 08:22:32 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:58.764 08:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.764 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.764 08:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.764 08:22:32 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:58.764 08:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.764 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.764 08:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.764 08:22:32 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.764 08:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.764 08:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:58.764 [2024-02-13 08:22:32.283809] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.764 08:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.764 08:22:32 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:00.134 08:22:33 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:00.134 08:22:33 -- common/autotest_common.sh@1175 -- # local i=0 00:22:00.134 08:22:33 -- common/autotest_common.sh@1176 -- # local nvme_device_counter=1 nvme_devices=0 00:22:00.134 08:22:33 -- 
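Before the connect above, the target side is assembled over RPC: a 64 MiB malloc bdev wrapped in a delay bdev, a TCP transport, and subsystem `cnode1` with that namespace and a 10.0.0.2:4420 listener. A sketch of the same bring-up; in the harness `rpc_cmd` wraps SPDK's `scripts/rpc.py`, and the wrapper and `$SPDK_ROOT` below are assumptions for illustration:

```shell
# Assumed stand-in for the harness's rpc_cmd wrapper around scripts/rpc.py.
rpc_cmd() {
    "$SPDK_ROOT/scripts/rpc.py" "$@"
}

bring_up_delay_target() {
    local nqn=$1
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns "$nqn" Delay0
    rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
}

# bring_up_delay_target nqn.2016-06.io.spdk:cnode1
```

The initiator then connects with `nvme connect -t tcp -n <nqn> -a 10.0.0.2 -s 4420 --hostnqn=... --hostid=...`, as logged.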
common/autotest_common.sh@1177 -- # [[ -n '' ]] 00:22:00.134 08:22:33 -- common/autotest_common.sh@1182 -- # sleep 2 00:22:02.060 08:22:35 -- common/autotest_common.sh@1183 -- # (( i++ <= 15 )) 00:22:02.060 08:22:35 -- common/autotest_common.sh@1184 -- # lsblk -l -o NAME,SERIAL 00:22:02.060 08:22:35 -- common/autotest_common.sh@1184 -- # grep -c SPDKISFASTANDAWESOME 00:22:02.060 08:22:35 -- common/autotest_common.sh@1184 -- # nvme_devices=1 00:22:02.060 08:22:35 -- common/autotest_common.sh@1185 -- # (( nvme_devices == nvme_device_counter )) 00:22:02.060 08:22:35 -- common/autotest_common.sh@1185 -- # return 0 00:22:02.060 08:22:35 -- target/initiator_timeout.sh@35 -- # fio_pid=2327375 00:22:02.060 08:22:35 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:02.060 08:22:35 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:02.060 [global] 00:22:02.060 thread=1 00:22:02.060 invalidate=1 00:22:02.060 rw=write 00:22:02.060 time_based=1 00:22:02.060 runtime=60 00:22:02.060 ioengine=libaio 00:22:02.060 direct=1 00:22:02.060 bs=4096 00:22:02.060 iodepth=1 00:22:02.060 norandommap=0 00:22:02.060 numjobs=1 00:22:02.060 00:22:02.060 verify_dump=1 00:22:02.060 verify_backlog=512 00:22:02.060 verify_state_save=0 00:22:02.060 do_verify=1 00:22:02.060 verify=crc32c-intel 00:22:02.060 [job0] 00:22:02.060 filename=/dev/nvme0n1 00:22:02.060 Could not set queue depth (nvme0n1) 00:22:02.337 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:02.337 fio-3.35 00:22:02.337 Starting 1 thread 00:22:04.861 08:22:38 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:04.861 08:22:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.861 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:22:04.861 true 00:22:04.861 08:22:38 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:22:04.861 08:22:38 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:04.861 08:22:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.861 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:22:04.861 true 00:22:04.861 08:22:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.861 08:22:38 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:04.861 08:22:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.861 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:22:04.861 true 00:22:04.861 08:22:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.861 08:22:38 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:04.861 08:22:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.861 08:22:38 -- common/autotest_common.sh@10 -- # set +x 00:22:05.119 true 00:22:05.119 08:22:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.119 08:22:38 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:08.396 08:22:41 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:08.396 08:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.396 08:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 true 00:22:08.396 08:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.396 08:22:41 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:08.396 08:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.396 08:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 true 00:22:08.396 08:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.396 08:22:41 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:08.396 08:22:41 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:22:08.396 08:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 true 00:22:08.397 08:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.397 08:22:41 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:08.397 08:22:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.397 08:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.397 true 00:22:08.397 08:22:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.397 08:22:41 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:08.397 08:22:41 -- target/initiator_timeout.sh@54 -- # wait 2327375 00:23:04.607 00:23:04.607 job0: (groupid=0, jobs=1): err= 0: pid=2327500: Tue Feb 13 08:23:35 2024 00:23:04.607 read: IOPS=81, BW=327KiB/s (335kB/s)(19.2MiB/60008msec) 00:23:04.607 slat (usec): min=6, max=10602, avg=16.29, stdev=182.20 00:23:04.607 clat (usec): min=283, max=41793k, avg=11898.29, stdev=596414.76 00:23:04.607 lat (usec): min=291, max=41793k, avg=11914.58, stdev=596414.92 00:23:04.607 clat percentiles (usec): 00:23:04.607 | 1.00th=[ 363], 5.00th=[ 396], 10.00th=[ 457], 00:23:04.607 | 20.00th=[ 494], 30.00th=[ 506], 40.00th=[ 519], 00:23:04.607 | 50.00th=[ 570], 60.00th=[ 635], 70.00th=[ 693], 00:23:04.607 | 80.00th=[ 717], 90.00th=[ 734], 95.00th=[ 41157], 00:23:04.607 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42730], 00:23:04.607 | 99.95th=[ 42730], 99.99th=[17112761] 00:23:04.607 write: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60008msec); 0 zone resets 00:23:04.607 slat (usec): min=9, max=30476, avg=17.06, stdev=425.77 00:23:04.607 clat (usec): min=195, max=1346, avg=267.85, stdev=43.91 00:23:04.607 lat (usec): min=207, max=31093, avg=284.91, stdev=432.89 00:23:04.607 clat percentiles (usec): 00:23:04.607 | 1.00th=[ 210], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:23:04.607 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:23:04.607 | 70.00th=[ 269], 80.00th=[ 
281], 90.00th=[ 322], 95.00th=[ 351], 00:23:04.607 | 99.00th=[ 416], 99.50th=[ 453], 99.90th=[ 619], 99.95th=[ 742], 00:23:04.607 | 99.99th=[ 1352] 00:23:04.607 bw ( KiB/s): min= 4096, max= 7072, per=100.00%, avg=5120.00, stdev=1164.76, samples=8 00:23:04.607 iops : min= 1024, max= 1768, avg=1280.00, stdev=291.19, samples=8 00:23:04.607 lat (usec) : 250=17.59%, 500=45.04%, 750=33.56%, 1000=0.37% 00:23:04.607 lat (msec) : 2=0.05%, 4=0.02%, 50=3.37%, >=2000=0.01% 00:23:04.607 cpu : usr=0.16%, sys=0.21%, ctx=10036, majf=0, minf=2 00:23:04.607 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:04.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.607 issued rwts: total=4911,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:04.607 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:04.607 00:23:04.607 Run status group 0 (all jobs): 00:23:04.607 READ: bw=327KiB/s (335kB/s), 327KiB/s-327KiB/s (335kB/s-335kB/s), io=19.2MiB (20.1MB), run=60008-60008msec 00:23:04.607 WRITE: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (21.0MB), run=60008-60008msec 00:23:04.607 00:23:04.607 Disk stats (read/write): 00:23:04.607 nvme0n1: ios=4961/5120, merge=0/0, ticks=17127/1333, in_queue=18460, util=100.00% 00:23:04.607 08:23:35 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:04.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:04.607 08:23:36 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:04.607 08:23:36 -- common/autotest_common.sh@1196 -- # local i=0 00:23:04.607 08:23:36 -- common/autotest_common.sh@1197 -- # lsblk -o NAME,SERIAL 00:23:04.607 08:23:36 -- common/autotest_common.sh@1197 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:04.607 08:23:36 -- common/autotest_common.sh@1204 -- # lsblk -l -o 
NAME,SERIAL 00:23:04.607 08:23:36 -- common/autotest_common.sh@1204 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:04.607 08:23:36 -- common/autotest_common.sh@1208 -- # return 0 00:23:04.607 08:23:36 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:04.607 08:23:36 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:04.607 nvmf hotplug test: fio successful as expected 00:23:04.607 08:23:36 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.607 08:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.607 08:23:36 -- common/autotest_common.sh@10 -- # set +x 00:23:04.607 08:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.607 08:23:36 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:04.607 08:23:36 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:04.607 08:23:36 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:04.607 08:23:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:04.607 08:23:36 -- nvmf/common.sh@116 -- # sync 00:23:04.607 08:23:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:04.607 08:23:36 -- nvmf/common.sh@119 -- # set +e 00:23:04.607 08:23:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:04.607 08:23:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:04.607 rmmod nvme_tcp 00:23:04.607 rmmod nvme_fabrics 00:23:04.607 rmmod nvme_keyring 00:23:04.607 08:23:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:04.607 08:23:36 -- nvmf/common.sh@123 -- # set -e 00:23:04.607 08:23:36 -- nvmf/common.sh@124 -- # return 0 00:23:04.607 08:23:36 -- nvmf/common.sh@477 -- # '[' -n 2326654 ']' 00:23:04.607 08:23:36 -- nvmf/common.sh@478 -- # killprocess 2326654 00:23:04.607 08:23:36 -- common/autotest_common.sh@924 -- # '[' -z 2326654 ']' 00:23:04.607 08:23:36 -- common/autotest_common.sh@928 -- # kill -0 2326654 00:23:04.607 08:23:36 -- 
common/autotest_common.sh@929 -- # uname 00:23:04.607 08:23:36 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:04.607 08:23:36 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2326654 00:23:04.607 08:23:36 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:04.607 08:23:36 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:04.607 08:23:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2326654' 00:23:04.607 killing process with pid 2326654 00:23:04.607 08:23:36 -- common/autotest_common.sh@943 -- # kill 2326654 00:23:04.607 08:23:36 -- common/autotest_common.sh@948 -- # wait 2326654 00:23:04.607 08:23:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:04.607 08:23:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:04.607 08:23:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:04.607 08:23:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.607 08:23:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:04.607 08:23:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.607 08:23:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.607 08:23:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.866 08:23:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:04.866 00:23:04.867 real 1m12.564s 00:23:04.867 user 4m25.045s 00:23:04.867 sys 0m5.754s 00:23:04.867 08:23:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:04.867 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:23:04.867 ************************************ 00:23:04.867 END TEST nvmf_initiator_timeout 00:23:04.867 ************************************ 00:23:04.867 08:23:38 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:04.867 08:23:38 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:23:04.867 08:23:38 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:23:04.867 08:23:38 -- nvmf/common.sh@284 
-- # xtrace_disable 00:23:04.867 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:23:11.427 08:23:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:11.427 08:23:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:11.427 08:23:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:11.427 08:23:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:11.427 08:23:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:11.427 08:23:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:11.427 08:23:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:11.427 08:23:44 -- nvmf/common.sh@294 -- # net_devs=() 00:23:11.427 08:23:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:11.427 08:23:44 -- nvmf/common.sh@295 -- # e810=() 00:23:11.427 08:23:44 -- nvmf/common.sh@295 -- # local -ga e810 00:23:11.427 08:23:44 -- nvmf/common.sh@296 -- # x722=() 00:23:11.427 08:23:44 -- nvmf/common.sh@296 -- # local -ga x722 00:23:11.427 08:23:44 -- nvmf/common.sh@297 -- # mlx=() 00:23:11.427 08:23:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:11.427 08:23:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.427 
08:23:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.427 08:23:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:11.427 08:23:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:11.427 08:23:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:11.427 08:23:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:11.427 08:23:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:11.427 08:23:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:11.428 08:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.428 08:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:11.428 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:11.428 08:23:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.428 08:23:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:11.428 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:11.428 08:23:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:11.428 08:23:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.428 08:23:44 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.428 08:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.428 08:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.428 08:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:11.428 Found net devices under 0000:af:00.0: cvl_0_0 00:23:11.428 08:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.428 08:23:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.428 08:23:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.428 08:23:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.428 08:23:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.428 08:23:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:11.428 Found net devices under 0000:af:00.1: cvl_0_1 00:23:11.428 08:23:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.428 08:23:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:11.428 08:23:44 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.428 08:23:44 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:23:11.428 08:23:44 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:11.428 08:23:44 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:23:11.428 08:23:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:11.428 08:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:11.428 ************************************ 00:23:11.428 START TEST nvmf_perf_adq 00:23:11.428 ************************************ 00:23:11.428 08:23:44 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:11.428 * Looking for test storage... 
00:23:11.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.428 08:23:44 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.428 08:23:44 -- nvmf/common.sh@7 -- # uname -s 00:23:11.428 08:23:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.428 08:23:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.428 08:23:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.428 08:23:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.428 08:23:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.428 08:23:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.428 08:23:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.428 08:23:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.428 08:23:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.428 08:23:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.428 08:23:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:11.428 08:23:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:11.428 08:23:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.428 08:23:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.428 08:23:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.428 08:23:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.428 08:23:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.428 08:23:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.428 08:23:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.428 08:23:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.428 08:23:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.428 08:23:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.428 08:23:44 -- paths/export.sh@5 -- # export PATH 00:23:11.428 08:23:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.428 08:23:44 -- nvmf/common.sh@46 -- # : 0 00:23:11.428 08:23:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:11.428 08:23:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:11.428 08:23:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:11.428 08:23:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.428 08:23:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.428 08:23:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:11.428 08:23:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:11.428 08:23:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:11.428 08:23:44 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:11.428 08:23:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:11.428 08:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:16.736 08:23:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:16.736 08:23:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:16.736 08:23:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:16.736 08:23:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:16.736 08:23:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:16.736 08:23:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:16.736 08:23:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:16.736 08:23:50 -- nvmf/common.sh@294 -- # net_devs=() 00:23:16.736 08:23:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:16.736 08:23:50 
-- nvmf/common.sh@295 -- # e810=() 00:23:16.736 08:23:50 -- nvmf/common.sh@295 -- # local -ga e810 00:23:16.737 08:23:50 -- nvmf/common.sh@296 -- # x722=() 00:23:16.737 08:23:50 -- nvmf/common.sh@296 -- # local -ga x722 00:23:16.737 08:23:50 -- nvmf/common.sh@297 -- # mlx=() 00:23:16.737 08:23:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:16.737 08:23:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.737 08:23:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:16.737 08:23:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:16.737 08:23:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:16.737 08:23:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.737 08:23:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:16.737 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:23:16.737 08:23:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:16.737 08:23:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:16.737 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:16.737 08:23:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:16.737 08:23:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:16.737 08:23:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.737 08:23:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.737 08:23:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.737 08:23:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.737 08:23:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:16.737 Found net devices under 0000:af:00.0: cvl_0_0 00:23:16.737 08:23:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.737 08:23:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:16.737 08:23:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.737 08:23:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:16.737 08:23:50 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.737 08:23:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:16.737 Found net devices under 0000:af:00.1: cvl_0_1 00:23:16.737 08:23:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.737 08:23:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:16.737 08:23:50 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.737 08:23:50 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:16.737 08:23:50 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:16.737 08:23:50 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:16.737 08:23:50 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:18.109 08:23:51 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:20.005 08:23:53 -- target/perf_adq.sh@54 -- # sleep 5 00:23:25.274 08:23:58 -- target/perf_adq.sh@67 -- # nvmftestinit 00:23:25.274 08:23:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:25.274 08:23:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.274 08:23:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:25.274 08:23:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:25.274 08:23:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:25.274 08:23:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.274 08:23:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.274 08:23:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.274 08:23:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:25.274 08:23:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:25.274 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 08:23:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.274 08:23:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:25.274 
08:23:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:25.274 08:23:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:25.274 08:23:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:25.274 08:23:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:25.274 08:23:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:25.274 08:23:58 -- nvmf/common.sh@294 -- # net_devs=() 00:23:25.274 08:23:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:25.274 08:23:58 -- nvmf/common.sh@295 -- # e810=() 00:23:25.274 08:23:58 -- nvmf/common.sh@295 -- # local -ga e810 00:23:25.274 08:23:58 -- nvmf/common.sh@296 -- # x722=() 00:23:25.274 08:23:58 -- nvmf/common.sh@296 -- # local -ga x722 00:23:25.274 08:23:58 -- nvmf/common.sh@297 -- # mlx=() 00:23:25.274 08:23:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:25.274 08:23:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.274 08:23:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:25.274 08:23:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:25.274 08:23:58 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:25.274 08:23:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:25.274 08:23:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.274 08:23:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:25.274 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:25.274 08:23:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:25.274 08:23:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:25.274 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:25.274 08:23:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:25.274 08:23:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.274 08:23:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.274 08:23:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:25.274 08:23:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.274 08:23:58 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:af:00.0: cvl_0_0' 00:23:25.274 Found net devices under 0000:af:00.0: cvl_0_0 00:23:25.274 08:23:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.274 08:23:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:25.274 08:23:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.274 08:23:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:25.274 08:23:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.274 08:23:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:25.274 Found net devices under 0000:af:00.1: cvl_0_1 00:23:25.274 08:23:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.274 08:23:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:25.274 08:23:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:25.274 08:23:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:25.274 08:23:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.274 08:23:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.274 08:23:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.274 08:23:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:25.274 08:23:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.274 08:23:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.274 08:23:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:25.274 08:23:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.274 08:23:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.274 08:23:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:25.274 08:23:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:25.274 08:23:58 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk 00:23:25.274 08:23:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.274 08:23:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.274 08:23:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.274 08:23:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:25.274 08:23:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.274 08:23:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.274 08:23:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.274 08:23:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:25.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:23:25.274 00:23:25.274 --- 10.0.0.2 ping statistics --- 00:23:25.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.274 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:23:25.274 08:23:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:23:25.274 00:23:25.274 --- 10.0.0.1 ping statistics --- 00:23:25.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.274 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:25.274 08:23:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.274 08:23:58 -- nvmf/common.sh@410 -- # return 0 00:23:25.274 08:23:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:25.274 08:23:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.274 08:23:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:25.274 08:23:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.274 08:23:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:25.274 08:23:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:25.274 08:23:58 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:25.274 08:23:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:25.274 08:23:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:25.274 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 08:23:58 -- nvmf/common.sh@469 -- # nvmfpid=2345897 00:23:25.274 08:23:58 -- nvmf/common.sh@470 -- # waitforlisten 2345897 00:23:25.274 08:23:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:25.274 08:23:58 -- common/autotest_common.sh@817 -- # '[' -z 2345897 ']' 00:23:25.274 08:23:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.274 08:23:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:25.274 08:23:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:25.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.274 08:23:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:25.274 08:23:58 -- common/autotest_common.sh@10 -- # set +x 00:23:25.274 [2024-02-13 08:23:58.672958] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:23:25.274 [2024-02-13 08:23:58.673003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.274 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.274 [2024-02-13 08:23:58.737611] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.274 [2024-02-13 08:23:58.813542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:25.275 [2024-02-13 08:23:58.813662] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.275 [2024-02-13 08:23:58.813670] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.275 [2024-02-13 08:23:58.813677] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.275 [2024-02-13 08:23:58.813746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.275 [2024-02-13 08:23:58.813849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.275 [2024-02-13 08:23:58.813937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.275 [2024-02-13 08:23:58.813938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.840 08:23:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:25.840 08:23:59 -- common/autotest_common.sh@850 -- # return 0 00:23:25.840 08:23:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:25.840 08:23:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:25.840 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:25.840 08:23:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.840 08:23:59 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:23:25.840 08:23:59 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:25.840 08:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.840 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:25.840 08:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.840 08:23:59 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:25.840 08:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.840 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 08:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.098 08:23:59 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:26.098 08:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.098 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 [2024-02-13 08:23:59.608742] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:23:26.098 08:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.098 08:23:59 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:26.098 08:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.098 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 Malloc1 00:23:26.098 08:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.098 08:23:59 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.098 08:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.098 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 08:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.098 08:23:59 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:26.098 08:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.098 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 08:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.098 08:23:59 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.098 08:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.098 08:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:26.098 [2024-02-13 08:23:59.660306] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.098 08:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.098 08:23:59 -- target/perf_adq.sh@73 -- # perfpid=2346147 00:23:26.098 08:23:59 -- target/perf_adq.sh@74 -- # sleep 2 00:23:26.098 08:23:59 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:23:26.098 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.999 08:24:01 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:23:27.999 08:24:01 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:27.999 08:24:01 -- target/perf_adq.sh@76 -- # wc -l 00:23:27.999 08:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.999 08:24:01 -- common/autotest_common.sh@10 -- # set +x 00:23:28.256 08:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.257 08:24:01 -- target/perf_adq.sh@76 -- # count=4 00:23:28.257 08:24:01 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:23:28.257 08:24:01 -- target/perf_adq.sh@81 -- # wait 2346147 00:23:36.363 Initializing NVMe Controllers 00:23:36.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:36.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:36.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:36.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:36.363 Initialization complete. Launching workers. 
00:23:36.363 ======================================================== 00:23:36.363 Latency(us) 00:23:36.363 Device Information : IOPS MiB/s Average min max 00:23:36.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11727.87 45.81 5457.19 2314.39 8695.79 00:23:36.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11822.77 46.18 5413.19 1599.89 10403.54 00:23:36.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11679.27 45.62 5479.51 1990.28 10247.01 00:23:36.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11787.87 46.05 5428.91 2050.38 12636.17 00:23:36.363 ======================================================== 00:23:36.363 Total : 47017.77 183.66 5444.58 1599.89 12636.17 00:23:36.363 00:23:36.363 08:24:09 -- target/perf_adq.sh@82 -- # nvmftestfini 00:23:36.363 08:24:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:36.363 08:24:09 -- nvmf/common.sh@116 -- # sync 00:23:36.363 08:24:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:36.363 08:24:09 -- nvmf/common.sh@119 -- # set +e 00:23:36.363 08:24:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:36.363 08:24:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:36.363 rmmod nvme_tcp 00:23:36.363 rmmod nvme_fabrics 00:23:36.363 rmmod nvme_keyring 00:23:36.363 08:24:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:36.363 08:24:09 -- nvmf/common.sh@123 -- # set -e 00:23:36.363 08:24:09 -- nvmf/common.sh@124 -- # return 0 00:23:36.363 08:24:09 -- nvmf/common.sh@477 -- # '[' -n 2345897 ']' 00:23:36.363 08:24:09 -- nvmf/common.sh@478 -- # killprocess 2345897 00:23:36.363 08:24:09 -- common/autotest_common.sh@924 -- # '[' -z 2345897 ']' 00:23:36.363 08:24:09 -- common/autotest_common.sh@928 -- # kill -0 2345897 00:23:36.363 08:24:09 -- common/autotest_common.sh@929 -- # uname 00:23:36.363 08:24:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:36.363 08:24:09 -- 
common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2345897 00:23:36.363 08:24:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:36.363 08:24:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:36.363 08:24:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2345897' 00:23:36.363 killing process with pid 2345897 00:23:36.363 08:24:09 -- common/autotest_common.sh@943 -- # kill 2345897 00:23:36.363 08:24:09 -- common/autotest_common.sh@948 -- # wait 2345897 00:23:36.620 08:24:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:36.620 08:24:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:36.620 08:24:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:36.620 08:24:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.620 08:24:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:36.620 08:24:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.620 08:24:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.620 08:24:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.520 08:24:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:38.520 08:24:12 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:23:38.520 08:24:12 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:39.894 08:24:13 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:41.838 08:24:15 -- target/perf_adq.sh@54 -- # sleep 5 00:23:47.115 08:24:20 -- target/perf_adq.sh@87 -- # nvmftestinit 00:23:47.115 08:24:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:47.115 08:24:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.116 08:24:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:47.116 08:24:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:47.116 08:24:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:47.116 08:24:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.116 
08:24:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.116 08:24:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.116 08:24:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:47.116 08:24:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:47.116 08:24:20 -- common/autotest_common.sh@10 -- # set +x 00:23:47.116 08:24:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:47.116 08:24:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:47.116 08:24:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:47.116 08:24:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:47.116 08:24:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:47.116 08:24:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:47.116 08:24:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:47.116 08:24:20 -- nvmf/common.sh@294 -- # net_devs=() 00:23:47.116 08:24:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:47.116 08:24:20 -- nvmf/common.sh@295 -- # e810=() 00:23:47.116 08:24:20 -- nvmf/common.sh@295 -- # local -ga e810 00:23:47.116 08:24:20 -- nvmf/common.sh@296 -- # x722=() 00:23:47.116 08:24:20 -- nvmf/common.sh@296 -- # local -ga x722 00:23:47.116 08:24:20 -- nvmf/common.sh@297 -- # mlx=() 00:23:47.116 08:24:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:47.116 08:24:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.116 08:24:20 -- 
nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.116 08:24:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:47.116 08:24:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:47.116 08:24:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:47.116 08:24:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:47.116 08:24:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:47.116 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:47.116 08:24:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:47.116 08:24:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:47.116 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:47.116 08:24:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@351 -- # 
[[ tcp == rdma ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:47.116 08:24:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:47.116 08:24:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.116 08:24:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:47.116 08:24:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.116 08:24:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:47.116 Found net devices under 0000:af:00.0: cvl_0_0 00:23:47.116 08:24:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.116 08:24:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:47.116 08:24:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.116 08:24:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:47.116 08:24:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.116 08:24:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:47.116 Found net devices under 0000:af:00.1: cvl_0_1 00:23:47.116 08:24:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.116 08:24:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:47.116 08:24:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:47.116 08:24:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:47.116 08:24:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.116 08:24:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.116 08:24:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.116 08:24:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:47.116 08:24:20 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.116 08:24:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.116 08:24:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:47.116 08:24:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.116 08:24:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.116 08:24:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:47.116 08:24:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:47.116 08:24:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.116 08:24:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.116 08:24:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.116 08:24:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.116 08:24:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:47.116 08:24:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.116 08:24:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.116 08:24:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.116 08:24:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:47.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:23:47.116 00:23:47.116 --- 10.0.0.2 ping statistics --- 00:23:47.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.116 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:47.116 08:24:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:47.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:23:47.116 00:23:47.116 --- 10.0.0.1 ping statistics --- 00:23:47.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.116 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:23:47.116 08:24:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.116 08:24:20 -- nvmf/common.sh@410 -- # return 0 00:23:47.116 08:24:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:47.116 08:24:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.116 08:24:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:47.116 08:24:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.116 08:24:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:47.116 08:24:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:47.116 08:24:20 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:23:47.116 08:24:20 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:47.116 08:24:20 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:47.116 08:24:20 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:47.116 net.core.busy_poll = 1 00:23:47.116 08:24:20 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:47.116 net.core.busy_read = 1 00:23:47.116 08:24:20 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:47.116 08:24:20 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:47.116 08:24:20 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:47.116 08:24:20 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:47.116 08:24:20 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:47.376 08:24:20 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:47.376 08:24:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:47.376 08:24:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:47.376 08:24:20 -- common/autotest_common.sh@10 -- # set +x 00:23:47.376 08:24:20 -- nvmf/common.sh@469 -- # nvmfpid=2350453 00:23:47.376 08:24:20 -- nvmf/common.sh@470 -- # waitforlisten 2350453 00:23:47.376 08:24:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:47.376 08:24:20 -- common/autotest_common.sh@817 -- # '[' -z 2350453 ']' 00:23:47.376 08:24:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.376 08:24:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:47.376 08:24:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.376 08:24:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:47.376 08:24:20 -- common/autotest_common.sh@10 -- # set +x 00:23:47.376 [2024-02-13 08:24:20.858475] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:23:47.376 [2024-02-13 08:24:20.858525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.376 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.376 [2024-02-13 08:24:20.923002] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.376 [2024-02-13 08:24:20.998087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:47.376 [2024-02-13 08:24:20.998207] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.376 [2024-02-13 08:24:20.998215] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.376 [2024-02-13 08:24:20.998222] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.376 [2024-02-13 08:24:20.998264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.376 [2024-02-13 08:24:20.998377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.376 [2024-02-13 08:24:20.998468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.376 [2024-02-13 08:24:20.998469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.312 08:24:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:48.312 08:24:21 -- common/autotest_common.sh@850 -- # return 0 00:23:48.312 08:24:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:48.312 08:24:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:48.312 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 08:24:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.312 08:24:21 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:23:48.312 08:24:21 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:48.312 08:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.312 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 08:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.312 08:24:21 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:48.312 08:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.312 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 08:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.312 08:24:21 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:48.312 08:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.312 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 [2024-02-13 08:24:21.798335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.312 08:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.312 08:24:21 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:48.312 08:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.312 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 Malloc1 00:23:48.312 08:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.312 08:24:21 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:48.312 08:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.312 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 08:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.312 08:24:21 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:48.312 08:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.312 08:24:21 -- 
common/autotest_common.sh@10 -- # set +x 00:23:48.312 08:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.312 08:24:21 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.312 08:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.312 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 [2024-02-13 08:24:21.841578] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.312 08:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.312 08:24:21 -- target/perf_adq.sh@94 -- # perfpid=2350670 00:23:48.312 08:24:21 -- target/perf_adq.sh@95 -- # sleep 2 00:23:48.312 08:24:21 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:48.312 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.209 08:24:23 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:23:50.209 08:24:23 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:50.209 08:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.209 08:24:23 -- target/perf_adq.sh@97 -- # wc -l 00:23:50.209 08:24:23 -- common/autotest_common.sh@10 -- # set +x 00:23:50.209 08:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.209 08:24:23 -- target/perf_adq.sh@97 -- # count=2 00:23:50.209 08:24:23 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:23:50.209 08:24:23 -- target/perf_adq.sh@103 -- # wait 2350670 00:23:58.313 Initializing NVMe Controllers 00:23:58.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:58.313 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:58.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:58.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:58.313 Initialization complete. Launching workers. 00:23:58.313 ======================================================== 00:23:58.313 Latency(us) 00:23:58.313 Device Information : IOPS MiB/s Average min max 00:23:58.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5456.10 21.31 11776.81 1712.30 56709.67 00:23:58.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6450.10 25.20 9954.76 1705.94 55194.32 00:23:58.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6356.90 24.83 10068.20 1746.54 54931.21 00:23:58.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14140.10 55.23 4534.81 1416.02 45932.53 00:23:58.313 ======================================================== 00:23:58.313 Total : 32403.20 126.57 7918.66 1416.02 56709.67 00:23:58.313 00:23:58.572 08:24:31 -- target/perf_adq.sh@104 -- # nvmftestfini 00:23:58.572 08:24:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:58.572 08:24:32 -- nvmf/common.sh@116 -- # sync 00:23:58.572 08:24:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:58.572 08:24:32 -- nvmf/common.sh@119 -- # set +e 00:23:58.572 08:24:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:58.572 08:24:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:58.572 rmmod nvme_tcp 00:23:58.572 rmmod nvme_fabrics 00:23:58.572 rmmod nvme_keyring 00:23:58.572 08:24:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:58.572 08:24:32 -- nvmf/common.sh@123 -- # set -e 00:23:58.572 08:24:32 -- nvmf/common.sh@124 -- # return 0 00:23:58.572 08:24:32 -- nvmf/common.sh@477 -- # '[' -n 2350453 ']' 00:23:58.572 08:24:32 -- nvmf/common.sh@478 -- # killprocess 2350453 00:23:58.572 
08:24:32 -- common/autotest_common.sh@924 -- # '[' -z 2350453 ']' 00:23:58.572 08:24:32 -- common/autotest_common.sh@928 -- # kill -0 2350453 00:23:58.572 08:24:32 -- common/autotest_common.sh@929 -- # uname 00:23:58.572 08:24:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:58.572 08:24:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2350453 00:23:58.572 08:24:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:58.572 08:24:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:58.572 08:24:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2350453' 00:23:58.572 killing process with pid 2350453 00:23:58.572 08:24:32 -- common/autotest_common.sh@943 -- # kill 2350453 00:23:58.572 08:24:32 -- common/autotest_common.sh@948 -- # wait 2350453 00:23:58.831 08:24:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:58.831 08:24:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:58.831 08:24:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:58.831 08:24:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.831 08:24:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:58.831 08:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.831 08:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.831 08:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.121 08:24:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:02.121 08:24:35 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:02.121 00:24:02.121 real 0m51.130s 00:24:02.121 user 2m48.290s 00:24:02.121 sys 0m10.512s 00:24:02.121 08:24:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:02.121 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:24:02.121 ************************************ 00:24:02.121 END TEST nvmf_perf_adq 00:24:02.121 ************************************ 
00:24:02.121 08:24:35 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:02.121 08:24:35 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:02.121 08:24:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:02.121 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:24:02.121 ************************************ 00:24:02.121 START TEST nvmf_shutdown 00:24:02.121 ************************************ 00:24:02.121 08:24:35 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:02.121 * Looking for test storage... 00:24:02.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:02.121 08:24:35 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.121 08:24:35 -- nvmf/common.sh@7 -- # uname -s 00:24:02.121 08:24:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.121 08:24:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.121 08:24:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.121 08:24:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.121 08:24:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.121 08:24:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.121 08:24:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.121 08:24:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.121 08:24:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.121 08:24:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.121 08:24:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:02.121 08:24:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:02.121 08:24:35 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.121 08:24:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.121 08:24:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.121 08:24:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.121 08:24:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.121 08:24:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.121 08:24:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.121 08:24:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.121 08:24:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.121 08:24:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.121 08:24:35 -- paths/export.sh@5 -- # export PATH 00:24:02.121 08:24:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.121 08:24:35 -- nvmf/common.sh@46 -- # : 0 00:24:02.121 08:24:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.121 08:24:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.121 08:24:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.121 08:24:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.121 08:24:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.121 08:24:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.121 08:24:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.121 08:24:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.121 08:24:35 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.121 08:24:35 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.121 08:24:35 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:02.121 08:24:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:24:02.121 08:24:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:02.121 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:24:02.121 ************************************ 00:24:02.121 START TEST nvmf_shutdown_tc1 00:24:02.121 ************************************ 00:24:02.121 08:24:35 -- common/autotest_common.sh@1102 -- # nvmf_shutdown_tc1 00:24:02.121 08:24:35 -- target/shutdown.sh@74 -- # starttarget 00:24:02.121 08:24:35 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:02.121 08:24:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:02.121 08:24:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.121 08:24:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.121 08:24:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.121 08:24:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.121 08:24:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.121 08:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.121 08:24:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.121 08:24:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:02.121 08:24:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.121 08:24:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.121 08:24:35 -- common/autotest_common.sh@10 -- # set +x 00:24:07.387 08:24:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:07.387 08:24:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:07.387 08:24:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:07.387 08:24:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:07.387 08:24:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:07.387 08:24:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:07.387 08:24:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:24:07.387 08:24:40 -- nvmf/common.sh@294 -- # net_devs=() 00:24:07.387 08:24:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:07.387 08:24:40 -- nvmf/common.sh@295 -- # e810=() 00:24:07.387 08:24:40 -- nvmf/common.sh@295 -- # local -ga e810 00:24:07.387 08:24:40 -- nvmf/common.sh@296 -- # x722=() 00:24:07.387 08:24:40 -- nvmf/common.sh@296 -- # local -ga x722 00:24:07.387 08:24:40 -- nvmf/common.sh@297 -- # mlx=() 00:24:07.387 08:24:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:07.387 08:24:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.387 08:24:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:07.387 08:24:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:07.387 08:24:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:07.387 08:24:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:24:07.387 08:24:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:07.387 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:07.387 08:24:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:07.387 08:24:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:07.387 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:07.387 08:24:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:07.387 08:24:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.387 08:24:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.387 08:24:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.387 08:24:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.387 08:24:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:07.387 Found net devices under 0000:af:00.0: cvl_0_0 00:24:07.387 08:24:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.387 08:24:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.387 08:24:40 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.387 08:24:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.387 08:24:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.387 08:24:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:07.387 Found net devices under 0000:af:00.1: cvl_0_1 00:24:07.387 08:24:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.387 08:24:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:07.387 08:24:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:07.387 08:24:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:07.387 08:24:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:07.387 08:24:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.387 08:24:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.387 08:24:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.387 08:24:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:07.387 08:24:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.387 08:24:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.387 08:24:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:07.387 08:24:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.387 08:24:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.387 08:24:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:07.387 08:24:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:07.387 08:24:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.387 08:24:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.388 08:24:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.388 08:24:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:07.388 08:24:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:07.388 08:24:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.645 08:24:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.645 08:24:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.645 08:24:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:07.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:24:07.646 00:24:07.646 --- 10.0.0.2 ping statistics --- 00:24:07.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.646 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:07.646 08:24:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:24:07.646 00:24:07.646 --- 10.0.0.1 ping statistics --- 00:24:07.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.646 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:24:07.646 08:24:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.646 08:24:41 -- nvmf/common.sh@410 -- # return 0 00:24:07.646 08:24:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:07.646 08:24:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.646 08:24:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:07.646 08:24:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:07.646 08:24:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.646 08:24:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:07.646 08:24:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:07.646 08:24:41 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:07.646 08:24:41 -- nvmf/common.sh@467 -- # 
timing_enter start_nvmf_tgt 00:24:07.646 08:24:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:07.646 08:24:41 -- common/autotest_common.sh@10 -- # set +x 00:24:07.646 08:24:41 -- nvmf/common.sh@469 -- # nvmfpid=2356208 00:24:07.646 08:24:41 -- nvmf/common.sh@470 -- # waitforlisten 2356208 00:24:07.646 08:24:41 -- common/autotest_common.sh@817 -- # '[' -z 2356208 ']' 00:24:07.646 08:24:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.646 08:24:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:07.646 08:24:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.646 08:24:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:07.646 08:24:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:07.646 08:24:41 -- common/autotest_common.sh@10 -- # set +x 00:24:07.646 [2024-02-13 08:24:41.226901] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:07.646 [2024-02-13 08:24:41.226946] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.646 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.646 [2024-02-13 08:24:41.288859] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.904 [2024-02-13 08:24:41.370524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:07.904 [2024-02-13 08:24:41.370628] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:07.904 [2024-02-13 08:24:41.370635] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.904 [2024-02-13 08:24:41.370642] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.904 [2024-02-13 08:24:41.370741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.904 [2024-02-13 08:24:41.370834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.904 [2024-02-13 08:24:41.370939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.904 [2024-02-13 08:24:41.370940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:08.469 08:24:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:08.469 08:24:42 -- common/autotest_common.sh@850 -- # return 0 00:24:08.469 08:24:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:08.469 08:24:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:08.469 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:08.469 08:24:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.469 08:24:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.469 08:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.469 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:08.469 [2024-02-13 08:24:42.062858] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.469 08:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.469 08:24:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:08.469 08:24:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:08.469 08:24:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:08.469 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:08.469 08:24:42 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.469 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.469 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.470 08:24:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.470 08:24:42 -- target/shutdown.sh@28 -- # cat 00:24:08.470 08:24:42 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:08.470 08:24:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.470 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:08.470 Malloc1 00:24:08.733 [2024-02-13 08:24:42.158657] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.733 Malloc2 00:24:08.733 Malloc3 00:24:08.733 Malloc4 00:24:08.733 Malloc5 00:24:08.733 Malloc6 00:24:08.733 Malloc7 00:24:09.050 Malloc8 00:24:09.050 
Malloc9 00:24:09.050 Malloc10 00:24:09.050 08:24:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.050 08:24:42 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:09.050 08:24:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:09.050 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:09.050 08:24:42 -- target/shutdown.sh@78 -- # perfpid=2356483 00:24:09.050 08:24:42 -- target/shutdown.sh@79 -- # waitforlisten 2356483 /var/tmp/bdevperf.sock 00:24:09.050 08:24:42 -- common/autotest_common.sh@817 -- # '[' -z 2356483 ']' 00:24:09.050 08:24:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.050 08:24:42 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:09.050 08:24:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:09.050 08:24:42 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:09.050 08:24:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:09.050 08:24:42 -- nvmf/common.sh@520 -- # config=() 00:24:09.050 08:24:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:09.050 08:24:42 -- nvmf/common.sh@520 -- # local subsystem config 00:24:09.050 08:24:42 -- common/autotest_common.sh@10 -- # set +x 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": "Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": "Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": 
"Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": "Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": "Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 
00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": "Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 [2024-02-13 08:24:42.630104] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:09.050 [2024-02-13 08:24:42.630150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": "Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 08:24:42 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.050 "params": { 00:24:09.050 "name": "Nvme$subsystem", 00:24:09.050 "trtype": "$TEST_TRANSPORT", 00:24:09.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.050 "adrfam": "ipv4", 00:24:09.050 "trsvcid": "$NVMF_PORT", 00:24:09.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.050 "hdgst": ${hdgst:-false}, 00:24:09.050 "ddgst": ${ddgst:-false} 00:24:09.050 }, 00:24:09.050 "method": "bdev_nvme_attach_controller" 00:24:09.050 } 00:24:09.050 EOF 00:24:09.050 )") 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.050 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.050 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.050 { 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme$subsystem", 00:24:09.051 "trtype": "$TEST_TRANSPORT", 00:24:09.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "$NVMF_PORT", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:09.051 "hdgst": ${hdgst:-false}, 00:24:09.051 "ddgst": ${ddgst:-false} 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 } 00:24:09.051 EOF 00:24:09.051 )") 00:24:09.051 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.051 08:24:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:09.051 08:24:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:09.051 { 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme$subsystem", 00:24:09.051 "trtype": "$TEST_TRANSPORT", 00:24:09.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "$NVMF_PORT", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:09.051 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:09.051 "hdgst": ${hdgst:-false}, 00:24:09.051 "ddgst": ${ddgst:-false} 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 } 00:24:09.051 EOF 00:24:09.051 )") 00:24:09.051 08:24:42 -- nvmf/common.sh@542 -- # cat 00:24:09.051 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.051 08:24:42 -- nvmf/common.sh@544 -- # jq . 00:24:09.051 08:24:42 -- nvmf/common.sh@545 -- # IFS=, 00:24:09.051 08:24:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme1", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme2", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme3", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme4", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": 
"4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme5", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme6", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme7", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme8", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 
"params": { 00:24:09.051 "name": "Nvme9", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 },{ 00:24:09.051 "params": { 00:24:09.051 "name": "Nvme10", 00:24:09.051 "trtype": "tcp", 00:24:09.051 "traddr": "10.0.0.2", 00:24:09.051 "adrfam": "ipv4", 00:24:09.051 "trsvcid": "4420", 00:24:09.051 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:09.051 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:09.051 "hdgst": false, 00:24:09.051 "ddgst": false 00:24:09.051 }, 00:24:09.051 "method": "bdev_nvme_attach_controller" 00:24:09.051 }' 00:24:09.051 [2024-02-13 08:24:42.692253] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.309 [2024-02-13 08:24:42.763614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.309 [2024-02-13 08:24:42.763669] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:10.682 08:24:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:10.682 08:24:44 -- common/autotest_common.sh@850 -- # return 0 00:24:10.682 08:24:44 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:10.682 08:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.682 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:24:10.682 08:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.682 08:24:44 -- target/shutdown.sh@83 -- # kill -9 2356483 00:24:10.682 08:24:44 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:10.682 08:24:44 -- target/shutdown.sh@87 -- # sleep 1 00:24:11.617 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2356483 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:11.617 08:24:45 -- target/shutdown.sh@88 -- # kill -0 2356208 00:24:11.617 08:24:45 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:11.617 08:24:45 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:11.617 08:24:45 -- nvmf/common.sh@520 -- # config=() 00:24:11.617 08:24:45 -- nvmf/common.sh@520 -- # local subsystem config 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.617 "hdgst": ${hdgst:-false}, 00:24:11.617 "ddgst": ${ddgst:-false} 00:24:11.617 }, 00:24:11.617 "method": "bdev_nvme_attach_controller" 00:24:11.617 } 00:24:11.617 EOF 00:24:11.617 )") 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:24:11.617 "hdgst": ${hdgst:-false}, 00:24:11.617 "ddgst": ${ddgst:-false} 00:24:11.617 }, 00:24:11.617 "method": "bdev_nvme_attach_controller" 00:24:11.617 } 00:24:11.617 EOF 00:24:11.617 )") 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.617 "hdgst": ${hdgst:-false}, 00:24:11.617 "ddgst": ${ddgst:-false} 00:24:11.617 }, 00:24:11.617 "method": "bdev_nvme_attach_controller" 00:24:11.617 } 00:24:11.617 EOF 00:24:11.617 )") 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.617 "hdgst": ${hdgst:-false}, 00:24:11.617 "ddgst": ${ddgst:-false} 00:24:11.617 }, 00:24:11.617 "method": "bdev_nvme_attach_controller" 00:24:11.617 } 00:24:11.617 EOF 00:24:11.617 )") 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 
00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.617 "hdgst": ${hdgst:-false}, 00:24:11.617 "ddgst": ${ddgst:-false} 00:24:11.617 }, 00:24:11.617 "method": "bdev_nvme_attach_controller" 00:24:11.617 } 00:24:11.617 EOF 00:24:11.617 )") 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.617 "hdgst": ${hdgst:-false}, 00:24:11.617 "ddgst": ${ddgst:-false} 00:24:11.617 }, 00:24:11.617 "method": "bdev_nvme_attach_controller" 00:24:11.617 } 00:24:11.617 EOF 00:24:11.617 )") 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.617 [2024-02-13 08:24:45.153588] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:24:11.617 [2024-02-13 08:24:45.153655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356978 ] 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.617 "hdgst": ${hdgst:-false}, 00:24:11.617 "ddgst": ${ddgst:-false} 00:24:11.617 }, 00:24:11.617 "method": "bdev_nvme_attach_controller" 00:24:11.617 } 00:24:11.617 EOF 00:24:11.617 )") 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.617 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.617 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.617 { 00:24:11.617 "params": { 00:24:11.617 "name": "Nvme$subsystem", 00:24:11.617 "trtype": "$TEST_TRANSPORT", 00:24:11.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.617 "adrfam": "ipv4", 00:24:11.617 "trsvcid": "$NVMF_PORT", 00:24:11.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.618 "hdgst": ${hdgst:-false}, 00:24:11.618 "ddgst": ${ddgst:-false} 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 } 00:24:11.618 EOF 00:24:11.618 )") 00:24:11.618 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.618 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.618 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.618 { 00:24:11.618 "params": { 00:24:11.618 "name": 
"Nvme$subsystem", 00:24:11.618 "trtype": "$TEST_TRANSPORT", 00:24:11.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "$NVMF_PORT", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.618 "hdgst": ${hdgst:-false}, 00:24:11.618 "ddgst": ${ddgst:-false} 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 } 00:24:11.618 EOF 00:24:11.618 )") 00:24:11.618 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.618 08:24:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:11.618 08:24:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:11.618 { 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme$subsystem", 00:24:11.618 "trtype": "$TEST_TRANSPORT", 00:24:11.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "$NVMF_PORT", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.618 "hdgst": ${hdgst:-false}, 00:24:11.618 "ddgst": ${ddgst:-false} 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 } 00:24:11.618 EOF 00:24:11.618 )") 00:24:11.618 08:24:45 -- nvmf/common.sh@542 -- # cat 00:24:11.618 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.618 08:24:45 -- nvmf/common.sh@544 -- # jq . 
00:24:11.618 08:24:45 -- nvmf/common.sh@545 -- # IFS=, 00:24:11.618 08:24:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme1", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme2", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme3", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme4", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme5", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 
00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme6", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme7", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme8", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme9", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 
00:24:11.618 },{ 00:24:11.618 "params": { 00:24:11.618 "name": "Nvme10", 00:24:11.618 "trtype": "tcp", 00:24:11.618 "traddr": "10.0.0.2", 00:24:11.618 "adrfam": "ipv4", 00:24:11.618 "trsvcid": "4420", 00:24:11.618 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:11.618 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:11.618 "hdgst": false, 00:24:11.618 "ddgst": false 00:24:11.618 }, 00:24:11.618 "method": "bdev_nvme_attach_controller" 00:24:11.618 }' 00:24:11.618 [2024-02-13 08:24:45.217153] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.618 [2024-02-13 08:24:45.287160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.618 [2024-02-13 08:24:45.287218] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:13.003 Running I/O for 1 seconds... 00:24:13.939 00:24:13.939 Latency(us) 00:24:13.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.939 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme1n1 : 1.05 461.46 28.84 0.00 0.00 135387.85 8675.72 129823.70 00:24:13.939 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme2n1 : 1.07 488.42 30.53 0.00 0.00 126959.65 27088.21 106854.89 00:24:13.939 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme3n1 : 1.07 452.34 28.27 0.00 0.00 136945.88 25215.76 139810.13 00:24:13.939 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme4n1 : 1.08 481.92 30.12 0.00 0.00 128396.40 16602.45 104857.60 00:24:13.939 Job: Nvme5n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme5n1 : 1.08 527.86 32.99 0.00 0.00 116796.38 14168.26 98865.74 00:24:13.939 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme6n1 : 1.08 489.00 30.56 0.00 0.00 125578.37 7770.70 103359.63 00:24:13.939 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme7n1 : 1.07 450.75 28.17 0.00 0.00 133727.94 31332.45 102860.31 00:24:13.939 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme8n1 : 1.13 464.35 29.02 0.00 0.00 126284.79 10360.93 112347.43 00:24:13.939 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme9n1 : 1.09 518.62 32.41 0.00 0.00 116605.11 7396.21 96369.13 00:24:13.939 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.939 Verification LBA range: start 0x0 length 0x400 00:24:13.939 Nvme10n1 : 1.09 517.27 32.33 0.00 0.00 116430.24 4493.90 97367.77 00:24:13.939 =================================================================================================================== 00:24:13.939 Total : 4852.01 303.25 0.00 0.00 125868.02 4493.90 139810.13 00:24:13.939 [2024-02-13 08:24:47.594026] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:24:14.198 08:24:47 -- target/shutdown.sh@93 -- # stoptarget 00:24:14.198 08:24:47 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:14.198 08:24:47 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:14.198 08:24:47 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:14.198 08:24:47 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:14.198 08:24:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:14.198 08:24:47 -- nvmf/common.sh@116 -- # sync 00:24:14.198 08:24:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:14.198 08:24:47 -- nvmf/common.sh@119 -- # set +e 00:24:14.198 08:24:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:14.198 08:24:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:14.198 rmmod nvme_tcp 00:24:14.198 rmmod nvme_fabrics 00:24:14.198 rmmod nvme_keyring 00:24:14.198 08:24:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:14.198 08:24:47 -- nvmf/common.sh@123 -- # set -e 00:24:14.198 08:24:47 -- nvmf/common.sh@124 -- # return 0 00:24:14.198 08:24:47 -- nvmf/common.sh@477 -- # '[' -n 2356208 ']' 00:24:14.198 08:24:47 -- nvmf/common.sh@478 -- # killprocess 2356208 00:24:14.198 08:24:47 -- common/autotest_common.sh@924 -- # '[' -z 2356208 ']' 00:24:14.198 08:24:47 -- common/autotest_common.sh@928 -- # kill -0 2356208 00:24:14.198 08:24:47 -- common/autotest_common.sh@929 -- # uname 00:24:14.198 08:24:47 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:14.198 08:24:47 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2356208 00:24:14.456 08:24:47 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:24:14.456 08:24:47 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:24:14.456 08:24:47 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2356208' 00:24:14.456 killing process with pid 2356208 00:24:14.456 08:24:47 -- common/autotest_common.sh@943 -- # kill 2356208 00:24:14.456 08:24:47 -- common/autotest_common.sh@948 -- # wait 2356208 00:24:14.715 08:24:48 -- nvmf/common.sh@480 -- # '[' '' == iso 
']' 00:24:14.715 08:24:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:14.715 08:24:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:14.715 08:24:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.715 08:24:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:14.715 08:24:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.715 08:24:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.715 08:24:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.250 08:24:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:17.250 00:24:17.250 real 0m14.844s 00:24:17.250 user 0m32.952s 00:24:17.250 sys 0m5.574s 00:24:17.250 08:24:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:17.250 08:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.250 ************************************ 00:24:17.250 END TEST nvmf_shutdown_tc1 00:24:17.250 ************************************ 00:24:17.250 08:24:50 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:17.250 08:24:50 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:24:17.250 08:24:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:17.250 08:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.250 ************************************ 00:24:17.250 START TEST nvmf_shutdown_tc2 00:24:17.250 ************************************ 00:24:17.250 08:24:50 -- common/autotest_common.sh@1102 -- # nvmf_shutdown_tc2 00:24:17.250 08:24:50 -- target/shutdown.sh@98 -- # starttarget 00:24:17.250 08:24:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:17.250 08:24:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:17.250 08:24:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.250 08:24:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:17.250 08:24:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:17.250 08:24:50 -- 
nvmf/common.sh@400 -- # remove_spdk_ns 00:24:17.250 08:24:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.250 08:24:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.250 08:24:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.250 08:24:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:17.250 08:24:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:17.250 08:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.250 08:24:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:17.250 08:24:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:17.250 08:24:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:17.250 08:24:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:17.250 08:24:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:17.250 08:24:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:17.250 08:24:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:17.250 08:24:50 -- nvmf/common.sh@294 -- # net_devs=() 00:24:17.250 08:24:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:17.250 08:24:50 -- nvmf/common.sh@295 -- # e810=() 00:24:17.250 08:24:50 -- nvmf/common.sh@295 -- # local -ga e810 00:24:17.250 08:24:50 -- nvmf/common.sh@296 -- # x722=() 00:24:17.250 08:24:50 -- nvmf/common.sh@296 -- # local -ga x722 00:24:17.250 08:24:50 -- nvmf/common.sh@297 -- # mlx=() 00:24:17.250 08:24:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:17.250 08:24:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.250 08:24:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:17.250 08:24:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:17.250 08:24:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:17.250 08:24:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.250 08:24:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:17.250 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:17.250 08:24:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.250 08:24:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.251 08:24:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:17.251 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:17.251 08:24:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.251 08:24:50 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:17.251 08:24:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.251 08:24:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.251 08:24:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.251 08:24:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.251 08:24:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:17.251 Found net devices under 0000:af:00.0: cvl_0_0 00:24:17.251 08:24:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.251 08:24:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.251 08:24:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.251 08:24:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.251 08:24:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.251 08:24:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:17.251 Found net devices under 0000:af:00.1: cvl_0_1 00:24:17.251 08:24:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.251 08:24:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:17.251 08:24:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:17.251 08:24:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:17.251 08:24:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.251 08:24:50 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.251 08:24:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.251 08:24:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:17.251 08:24:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.251 08:24:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.251 08:24:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:17.251 08:24:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.251 08:24:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.251 08:24:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:17.251 08:24:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:17.251 08:24:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.251 08:24:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.251 08:24:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.251 08:24:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.251 08:24:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:17.251 08:24:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.251 08:24:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.251 08:24:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.251 08:24:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:17.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:17.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:24:17.251 00:24:17.251 --- 10.0.0.2 ping statistics --- 00:24:17.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.251 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:17.251 08:24:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:17.251 00:24:17.251 --- 10.0.0.1 ping statistics --- 00:24:17.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.251 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:17.251 08:24:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.251 08:24:50 -- nvmf/common.sh@410 -- # return 0 00:24:17.251 08:24:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:17.251 08:24:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.251 08:24:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:17.251 08:24:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.251 08:24:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:17.251 08:24:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:17.251 08:24:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:17.251 08:24:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:17.251 08:24:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:17.251 08:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.251 08:24:50 -- nvmf/common.sh@469 -- # nvmfpid=2357995 00:24:17.251 08:24:50 -- nvmf/common.sh@470 -- # waitforlisten 2357995 00:24:17.251 08:24:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:17.251 08:24:50 
-- common/autotest_common.sh@817 -- # '[' -z 2357995 ']' 00:24:17.251 08:24:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.251 08:24:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:17.251 08:24:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.251 08:24:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:17.251 08:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.251 [2024-02-13 08:24:50.806219] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:17.251 [2024-02-13 08:24:50.806278] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.251 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.251 [2024-02-13 08:24:50.871479] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.509 [2024-02-13 08:24:50.950939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:17.509 [2024-02-13 08:24:50.951039] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.509 [2024-02-13 08:24:50.951046] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.509 [2024-02-13 08:24:50.951052] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.509 [2024-02-13 08:24:50.951146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.509 [2024-02-13 08:24:50.951218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.509 [2024-02-13 08:24:50.951323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.509 [2024-02-13 08:24:50.951324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:18.076 08:24:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:18.076 08:24:51 -- common/autotest_common.sh@850 -- # return 0 00:24:18.076 08:24:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:18.076 08:24:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:18.076 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.076 08:24:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.076 08:24:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.076 08:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.076 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.076 [2024-02-13 08:24:51.660861] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.076 08:24:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.076 08:24:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:18.076 08:24:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:18.077 08:24:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:18.077 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.077 08:24:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:18.077 08:24:51 -- target/shutdown.sh@28 -- # cat 00:24:18.077 08:24:51 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:18.077 08:24:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.077 08:24:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.077 Malloc1 00:24:18.077 [2024-02-13 08:24:51.752173] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.336 Malloc2 00:24:18.336 Malloc3 00:24:18.336 Malloc4 00:24:18.336 Malloc5 00:24:18.336 Malloc6 00:24:18.336 Malloc7 00:24:18.596 Malloc8 00:24:18.596 Malloc9 00:24:18.596 Malloc10 00:24:18.596 08:24:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.596 08:24:52 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:18.596 08:24:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:18.596 08:24:52 -- 
common/autotest_common.sh@10 -- # set +x 00:24:18.596 08:24:52 -- target/shutdown.sh@102 -- # perfpid=2358281 00:24:18.596 08:24:52 -- target/shutdown.sh@103 -- # waitforlisten 2358281 /var/tmp/bdevperf.sock 00:24:18.596 08:24:52 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:18.596 08:24:52 -- common/autotest_common.sh@817 -- # '[' -z 2358281 ']' 00:24:18.597 08:24:52 -- nvmf/common.sh@520 -- # config=() 00:24:18.597 08:24:52 -- nvmf/common.sh@520 -- # local subsystem config 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:18.597 08:24:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:18.597 08:24:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:18.597 08:24:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:18.597 08:24:52 -- common/autotest_common.sh@10 -- # set +x 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 
00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 [2024-02-13 08:24:52.207116] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:18.597 [2024-02-13 08:24:52.207161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358281 ] 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 
08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:18.597 { 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme$subsystem", 00:24:18.597 "trtype": "$TEST_TRANSPORT", 00:24:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "$NVMF_PORT", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.597 "hdgst": ${hdgst:-false}, 00:24:18.597 "ddgst": ${ddgst:-false} 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 } 00:24:18.597 EOF 00:24:18.597 )") 00:24:18.597 08:24:52 -- nvmf/common.sh@542 -- # cat 00:24:18.597 08:24:52 -- nvmf/common.sh@544 -- # jq . 
00:24:18.597 08:24:52 -- nvmf/common.sh@545 -- # IFS=, 00:24:18.597 08:24:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:18.597 "params": { 00:24:18.597 "name": "Nvme1", 00:24:18.597 "trtype": "tcp", 00:24:18.597 "traddr": "10.0.0.2", 00:24:18.597 "adrfam": "ipv4", 00:24:18.597 "trsvcid": "4420", 00:24:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.597 "hdgst": false, 00:24:18.597 "ddgst": false 00:24:18.597 }, 00:24:18.597 "method": "bdev_nvme_attach_controller" 00:24:18.597 },{ 00:24:18.597 "params": { 00:24:18.598 "name": "Nvme2", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme3", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme4", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme5", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 
00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme6", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme7", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme8", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme9", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 
00:24:18.598 },{ 00:24:18.598 "params": { 00:24:18.598 "name": "Nvme10", 00:24:18.598 "trtype": "tcp", 00:24:18.598 "traddr": "10.0.0.2", 00:24:18.598 "adrfam": "ipv4", 00:24:18.598 "trsvcid": "4420", 00:24:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:18.598 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:18.598 "hdgst": false, 00:24:18.598 "ddgst": false 00:24:18.598 }, 00:24:18.598 "method": "bdev_nvme_attach_controller" 00:24:18.598 }' 00:24:18.598 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.598 [2024-02-13 08:24:52.270391] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.857 [2024-02-13 08:24:52.340183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.857 [2024-02-13 08:24:52.340239] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:20.234 Running I/O for 10 seconds... 
00:24:20.802 08:24:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.802 08:24:54 -- common/autotest_common.sh@850 -- # return 0 00:24:20.802 08:24:54 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:20.802 08:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.802 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:24:20.802 08:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.802 08:24:54 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:20.802 08:24:54 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:20.802 08:24:54 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:20.802 08:24:54 -- target/shutdown.sh@57 -- # local ret=1 00:24:20.802 08:24:54 -- target/shutdown.sh@58 -- # local i 00:24:20.802 08:24:54 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:20.802 08:24:54 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:20.802 08:24:54 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:20.802 08:24:54 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:20.802 08:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.802 08:24:54 -- common/autotest_common.sh@10 -- # set +x 00:24:20.802 08:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.802 08:24:54 -- target/shutdown.sh@60 -- # read_io_count=254 00:24:20.802 08:24:54 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:24:20.802 08:24:54 -- target/shutdown.sh@64 -- # ret=0 00:24:20.802 08:24:54 -- target/shutdown.sh@65 -- # break 00:24:20.802 08:24:54 -- target/shutdown.sh@69 -- # return 0 00:24:20.802 08:24:54 -- target/shutdown.sh@109 -- # killprocess 2358281 00:24:20.802 08:24:54 -- common/autotest_common.sh@924 -- # '[' -z 2358281 ']' 00:24:20.802 08:24:54 -- common/autotest_common.sh@928 -- # kill -0 2358281 00:24:20.802 08:24:54 -- common/autotest_common.sh@929 -- # uname 
00:24:20.802 08:24:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:20.802 08:24:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2358281 00:24:21.061 08:24:54 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:21.061 08:24:54 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:21.061 08:24:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2358281' 00:24:21.061 killing process with pid 2358281 00:24:21.061 08:24:54 -- common/autotest_common.sh@943 -- # kill 2358281 00:24:21.061 08:24:54 -- common/autotest_common.sh@948 -- # wait 2358281 00:24:21.061 Received shutdown signal, test time was about 0.725591 seconds 00:24:21.061 00:24:21.061 Latency(us) 00:24:21.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.061 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme1n1 : 0.72 495.85 30.99 0.00 0.00 119298.38 19348.72 114344.72 00:24:21.061 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme2n1 : 0.69 517.57 32.35 0.00 0.00 119535.38 16602.45 98366.42 00:24:21.061 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme3n1 : 0.67 469.41 29.34 0.00 0.00 130283.78 17725.93 108352.85 00:24:21.061 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme4n1 : 0.68 523.02 32.69 0.00 0.00 116002.15 17850.76 96868.45 00:24:21.061 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme5n1 : 0.67 467.59 29.22 0.00 0.00 128472.75 18599.74 101362.35 00:24:21.061 Job: Nvme6n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme6n1 : 0.72 489.70 30.61 0.00 0.00 115455.71 16976.94 102860.31 00:24:21.061 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme7n1 : 0.68 519.81 32.49 0.00 0.00 113291.04 17975.59 110849.46 00:24:21.061 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme8n1 : 0.69 474.40 29.65 0.00 0.00 122901.56 11796.48 97867.09 00:24:21.061 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme9n1 : 0.69 455.41 28.46 0.00 0.00 127433.09 14355.50 114844.04 00:24:21.061 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:21.061 Verification LBA range: start 0x0 length 0x400 00:24:21.061 Nvme10n1 : 0.69 455.86 28.49 0.00 0.00 126527.18 9175.04 122833.19 00:24:21.061 =================================================================================================================== 00:24:21.061 Total : 4868.62 304.29 0.00 0.00 121613.29 9175.04 122833.19 00:24:21.061 [2024-02-13 08:24:54.599272] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:24:21.319 08:24:54 -- target/shutdown.sh@112 -- # sleep 1 00:24:22.257 08:24:55 -- target/shutdown.sh@113 -- # kill -0 2357995 00:24:22.257 08:24:55 -- target/shutdown.sh@115 -- # stoptarget 00:24:22.258 08:24:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:22.258 08:24:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:22.258 08:24:55 -- 
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:22.258 08:24:55 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:22.258 08:24:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:22.258 08:24:55 -- nvmf/common.sh@116 -- # sync 00:24:22.258 08:24:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:22.258 08:24:55 -- nvmf/common.sh@119 -- # set +e 00:24:22.258 08:24:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:22.258 08:24:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:22.258 rmmod nvme_tcp 00:24:22.258 rmmod nvme_fabrics 00:24:22.258 rmmod nvme_keyring 00:24:22.258 08:24:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:22.258 08:24:55 -- nvmf/common.sh@123 -- # set -e 00:24:22.258 08:24:55 -- nvmf/common.sh@124 -- # return 0 00:24:22.258 08:24:55 -- nvmf/common.sh@477 -- # '[' -n 2357995 ']' 00:24:22.258 08:24:55 -- nvmf/common.sh@478 -- # killprocess 2357995 00:24:22.258 08:24:55 -- common/autotest_common.sh@924 -- # '[' -z 2357995 ']' 00:24:22.258 08:24:55 -- common/autotest_common.sh@928 -- # kill -0 2357995 00:24:22.258 08:24:55 -- common/autotest_common.sh@929 -- # uname 00:24:22.258 08:24:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:22.258 08:24:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2357995 00:24:22.516 08:24:55 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:24:22.516 08:24:55 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:24:22.516 08:24:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2357995' 00:24:22.516 killing process with pid 2357995 00:24:22.516 08:24:55 -- common/autotest_common.sh@943 -- # kill 2357995 00:24:22.516 08:24:55 -- common/autotest_common.sh@948 -- # wait 2357995 00:24:22.776 08:24:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:22.776 08:24:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:22.776 08:24:56 -- 
nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:22.776 08:24:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.776 08:24:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:22.776 08:24:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.776 08:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.776 08:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.314 08:24:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:25.314 00:24:25.314 real 0m7.976s 00:24:25.314 user 0m24.238s 00:24:25.314 sys 0m1.350s 00:24:25.314 08:24:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:25.314 08:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.314 ************************************ 00:24:25.314 END TEST nvmf_shutdown_tc2 00:24:25.314 ************************************ 00:24:25.314 08:24:58 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:25.314 08:24:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:24:25.314 08:24:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:25.314 08:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.314 ************************************ 00:24:25.314 START TEST nvmf_shutdown_tc3 00:24:25.314 ************************************ 00:24:25.314 08:24:58 -- common/autotest_common.sh@1102 -- # nvmf_shutdown_tc3 00:24:25.314 08:24:58 -- target/shutdown.sh@120 -- # starttarget 00:24:25.314 08:24:58 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:25.314 08:24:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:25.314 08:24:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.314 08:24:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:25.314 08:24:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:25.314 08:24:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:25.315 08:24:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:24:25.315 08:24:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.315 08:24:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.315 08:24:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:25.315 08:24:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:25.315 08:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.315 08:24:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:25.315 08:24:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:25.315 08:24:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:25.315 08:24:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:25.315 08:24:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:25.315 08:24:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:25.315 08:24:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:25.315 08:24:58 -- nvmf/common.sh@294 -- # net_devs=() 00:24:25.315 08:24:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:25.315 08:24:58 -- nvmf/common.sh@295 -- # e810=() 00:24:25.315 08:24:58 -- nvmf/common.sh@295 -- # local -ga e810 00:24:25.315 08:24:58 -- nvmf/common.sh@296 -- # x722=() 00:24:25.315 08:24:58 -- nvmf/common.sh@296 -- # local -ga x722 00:24:25.315 08:24:58 -- nvmf/common.sh@297 -- # mlx=() 00:24:25.315 08:24:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:25.315 08:24:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:25.315 08:24:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.315 08:24:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:25.315 08:24:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:25.315 08:24:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:25.315 08:24:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:25.315 08:24:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:25.315 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:25.315 08:24:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:25.315 08:24:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:25.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:25.315 08:24:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.315 08:24:58 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:25.315 08:24:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:25.315 08:24:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.315 08:24:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:25.315 08:24:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.315 08:24:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:25.315 Found net devices under 0000:af:00.0: cvl_0_0 00:24:25.315 08:24:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.315 08:24:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:25.315 08:24:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.315 08:24:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:25.315 08:24:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.315 08:24:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:25.315 Found net devices under 0000:af:00.1: cvl_0_1 00:24:25.315 08:24:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.315 08:24:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:25.315 08:24:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:25.315 08:24:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:25.315 08:24:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.315 08:24:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.315 08:24:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.315 08:24:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:24:25.315 08:24:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.315 08:24:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.315 08:24:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:25.315 08:24:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.315 08:24:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.315 08:24:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:25.315 08:24:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:25.315 08:24:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.315 08:24:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.315 08:24:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.315 08:24:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.315 08:24:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:25.315 08:24:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.315 08:24:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.315 08:24:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.315 08:24:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:25.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:24:25.315 00:24:25.315 --- 10.0.0.2 ping statistics --- 00:24:25.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.315 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:25.315 08:24:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:24:25.315 00:24:25.315 --- 10.0.0.1 ping statistics --- 00:24:25.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.315 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:24:25.315 08:24:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.315 08:24:58 -- nvmf/common.sh@410 -- # return 0 00:24:25.315 08:24:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:25.315 08:24:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.315 08:24:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:25.315 08:24:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.315 08:24:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:25.315 08:24:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:25.315 08:24:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:25.315 08:24:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:25.315 08:24:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:25.315 08:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.315 08:24:58 -- nvmf/common.sh@469 -- # nvmfpid=2359533 00:24:25.315 08:24:58 -- nvmf/common.sh@470 -- # waitforlisten 2359533 00:24:25.315 08:24:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:25.315 08:24:58 -- common/autotest_common.sh@817 -- # '[' -z 2359533 ']' 00:24:25.315 08:24:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.315 08:24:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:25.315 08:24:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:25.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.315 08:24:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:25.315 08:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.315 [2024-02-13 08:24:58.839927] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:25.315 [2024-02-13 08:24:58.839973] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.315 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.315 [2024-02-13 08:24:58.904598] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:25.315 [2024-02-13 08:24:58.975737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:25.315 [2024-02-13 08:24:58.975864] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.315 [2024-02-13 08:24:58.975872] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.315 [2024-02-13 08:24:58.975878] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:25.316 [2024-02-13 08:24:58.976001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.316 [2024-02-13 08:24:58.976091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.316 [2024-02-13 08:24:58.976199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.316 [2024-02-13 08:24:58.976200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:26.303 08:24:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:26.303 08:24:59 -- common/autotest_common.sh@850 -- # return 0 00:24:26.303 08:24:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:26.303 08:24:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:26.303 08:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 08:24:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.303 08:24:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.303 08:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.303 08:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 [2024-02-13 08:24:59.668815] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.303 08:24:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.303 08:24:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:26.303 08:24:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:26.303 08:24:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:26.303 08:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 08:24:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:26.303 08:24:59 -- target/shutdown.sh@28 -- # cat 00:24:26.303 08:24:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:26.303 08:24:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.303 08:24:59 -- common/autotest_common.sh@10 -- # set +x 00:24:26.303 Malloc1 00:24:26.303 [2024-02-13 08:24:59.760224] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.303 Malloc2 00:24:26.303 Malloc3 00:24:26.303 Malloc4 00:24:26.303 Malloc5 00:24:26.303 Malloc6 00:24:26.303 Malloc7 00:24:26.589 Malloc8 00:24:26.589 Malloc9 00:24:26.589 Malloc10 00:24:26.589 08:25:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.589 08:25:00 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:26.589 08:25:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:26.589 08:25:00 -- 
common/autotest_common.sh@10 -- # set +x 00:24:26.589 08:25:00 -- target/shutdown.sh@124 -- # perfpid=2359803 00:24:26.589 08:25:00 -- target/shutdown.sh@125 -- # waitforlisten 2359803 /var/tmp/bdevperf.sock 00:24:26.589 08:25:00 -- common/autotest_common.sh@817 -- # '[' -z 2359803 ']' 00:24:26.589 08:25:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.589 08:25:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:26.589 08:25:00 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:26.589 08:25:00 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:26.589 08:25:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:26.589 08:25:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:26.589 08:25:00 -- nvmf/common.sh@520 -- # config=() 00:24:26.589 08:25:00 -- common/autotest_common.sh@10 -- # set +x 00:24:26.589 08:25:00 -- nvmf/common.sh@520 -- # local subsystem config 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": 
"Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 
00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 [2024-02-13 08:25:00.232530] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:24:26.589 [2024-02-13 08:25:00.232581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359803 ] 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.589 "adrfam": "ipv4", 00:24:26.589 "trsvcid": "$NVMF_PORT", 00:24:26.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.589 "hdgst": ${hdgst:-false}, 00:24:26.589 "ddgst": ${ddgst:-false} 00:24:26.589 }, 00:24:26.589 "method": "bdev_nvme_attach_controller" 00:24:26.589 } 00:24:26.589 EOF 00:24:26.589 )") 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.589 08:25:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:26.589 08:25:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:24:26.589 { 00:24:26.589 "params": { 00:24:26.589 "name": "Nvme$subsystem", 00:24:26.589 "trtype": "$TEST_TRANSPORT", 00:24:26.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "$NVMF_PORT", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.590 "hdgst": ${hdgst:-false}, 00:24:26.590 "ddgst": ${ddgst:-false} 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 } 00:24:26.590 EOF 00:24:26.590 )") 00:24:26.590 08:25:00 -- nvmf/common.sh@542 -- # cat 00:24:26.590 08:25:00 -- nvmf/common.sh@544 -- # jq . 00:24:26.590 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.590 08:25:00 -- nvmf/common.sh@545 -- # IFS=, 00:24:26.590 08:25:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme1", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme2", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme3", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:26.590 "hostnqn": 
"nqn.2016-06.io.spdk:host3", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme4", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme5", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme6", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme7", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme8", 00:24:26.590 "trtype": "tcp", 00:24:26.590 
"traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme9", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 },{ 00:24:26.590 "params": { 00:24:26.590 "name": "Nvme10", 00:24:26.590 "trtype": "tcp", 00:24:26.590 "traddr": "10.0.0.2", 00:24:26.590 "adrfam": "ipv4", 00:24:26.590 "trsvcid": "4420", 00:24:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:26.590 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:26.590 "hdgst": false, 00:24:26.590 "ddgst": false 00:24:26.590 }, 00:24:26.590 "method": "bdev_nvme_attach_controller" 00:24:26.590 }' 00:24:26.850 [2024-02-13 08:25:00.295890] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.850 [2024-02-13 08:25:00.365591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.850 [2024-02-13 08:25:00.365651] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:24:28.758 Running I/O for 10 seconds... 
00:24:28.758 08:25:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:28.758 08:25:02 -- common/autotest_common.sh@850 -- # return 0 00:24:28.758 08:25:02 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:28.758 08:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.758 08:25:02 -- common/autotest_common.sh@10 -- # set +x 00:24:28.758 08:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.758 08:25:02 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.758 08:25:02 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:28.758 08:25:02 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:28.758 08:25:02 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:28.758 08:25:02 -- target/shutdown.sh@57 -- # local ret=1 00:24:28.758 08:25:02 -- target/shutdown.sh@58 -- # local i 00:24:28.758 08:25:02 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:28.758 08:25:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:28.758 08:25:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:28.758 08:25:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:28.758 08:25:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.758 08:25:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.029 08:25:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.029 08:25:02 -- target/shutdown.sh@60 -- # read_io_count=129 00:24:29.029 08:25:02 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:24:29.029 08:25:02 -- target/shutdown.sh@64 -- # ret=0 00:24:29.029 08:25:02 -- target/shutdown.sh@65 -- # break 00:24:29.029 08:25:02 -- target/shutdown.sh@69 -- # return 0 00:24:29.029 08:25:02 -- target/shutdown.sh@134 -- # killprocess 2359533 00:24:29.029 08:25:02 -- common/autotest_common.sh@924 -- # '[' -z 
2359533 ']' 00:24:29.029 08:25:02 -- common/autotest_common.sh@928 -- # kill -0 2359533 00:24:29.029 08:25:02 -- common/autotest_common.sh@929 -- # uname 00:24:29.029 08:25:02 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:29.029 08:25:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2359533 00:24:29.029 08:25:02 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:24:29.029 08:25:02 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:24:29.029 08:25:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2359533' killing process with pid 2359533 00:24:29.029 08:25:02 -- common/autotest_common.sh@943 -- # kill 2359533 00:24:29.029 08:25:02 -- common/autotest_common.sh@948 -- # wait 2359533 00:24:29.029 [2024-02-13 08:25:02.517357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10309a0 is same with the state(5) to be set 00:24:29.029 [identical message repeated for tqpair=0x10309a0 through 08:25:02.517826; duplicate lines omitted] 00:24:29.030 [2024-02-13 08:25:02.519236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10332b0 is same with the state(5) to be set 00:24:29.030 [identical message repeated for tqpair=0x10332b0 through 08:25:02.519288; duplicate lines omitted] 00:24:29.030 [2024-02-13 08:25:02.520431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030e30 is same with the state(5) to be set 00:24:29.030 [identical message repeated for tqpair=0x1030e30 through 08:25:02.520813; duplicate lines omitted] 00:24:29.031 [2024-02-13 08:25:02.520861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.031 [2024-02-13 08:25:02.520892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.031 [2024-02-13 08:25:02.520909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.031 [2024-02-13 08:25:02.520916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.031 [2024-02-13 08:25:02.520925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.031 [2024-02-13 08:25:02.520931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.031 [2024-02-13 08:25:02.520940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:24:29.031 [2024-02-13 08:25:02.520946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.031 [analogous nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs repeated for the remaining in-flight READ/WRITE commands on sqid:1 (lba 13824-22528, len:128), each completed ABORTED - SQ DELETION (00/08); duplicate lines omitted] 00:24:29.032 [2024-02-13 08:25:02.521515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 
08:25:02.521608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.032 [2024-02-13 08:25:02.521829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.032 [2024-02-13 08:25:02.521836] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2fbd140 is same with the state(5) to be set 00:24:29.032 [2024-02-13 08:25:02.522190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.032 [2024-02-13 08:25:02.522214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 
[2024-02-13 08:25:02.522221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522224] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2fbd140 was disconnected and freed. reset controller. 00:24:29.033 [2024-02-13 08:25:02.522227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522294] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522303] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:29.033 [2024-02-13 08:25:02.522307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with 
the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 
00:24:29.033 [2024-02-13 08:25:02.522526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 
08:25:02.522597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.522603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10312c0 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.523623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:29.033 [2024-02-13 08:25:02.523680] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e48cf0 (9): Bad file descriptor 00:24:29.033 [2024-02-13 08:25:02.523956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.523979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.523986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.523993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.523999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524023] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.033 [2024-02-13 08:25:02.524064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524095] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524170] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set 00:24:29.034 [2024-02-13 08:25:02.524241] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031750 is same with the state(5) to be set
00:24:29.034 (message repeated for tqpair=0x1031750 through [2024-02-13 08:25:02.524349])
00:24:29.034 [2024-02-13 08:25:02.525023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1031be0 is same with the state(5) to be set
00:24:29.035 (message repeated for tqpair=0x1031be0 through [2024-02-13 08:25:02.525392])
00:24:29.035 [2024-02-13 08:25:02.525402] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.035 [2024-02-13 08:25:02.525776] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.035 [2024-02-13 08:25:02.526608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032070 is same with the state(5) to be set
00:24:29.035 (message repeated for tqpair=0x1032070 through [2024-02-13 08:25:02.526987])
00:24:29.036 [2024-02-13 08:25:02.526995] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:29.036 [2024-02-13 08:25:02.527289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.036 [2024-02-13 08:25:02.527686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.036 [2024-02-13 08:25:02.527692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.527993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.527999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.528007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.528013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.528020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.528027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.528035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.528041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.528049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.528055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.037 [2024-02-13 08:25:02.528062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.037 [2024-02-13 08:25:02.528070]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:27 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:29.037 [2024-02-13 08:25:02.528234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-02-13 08:25:02.528240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528249] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27668c0 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.528300] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27668c0 was disconnected and freed. reset controller. 00:24:29.038 [2024-02-13 08:25:02.528336] nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:29.038 [2024-02-13 08:25:02.528473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528534] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6dce0 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.528557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528610] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6d250 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.528635] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528700] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0310 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.528727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528779] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97630 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.528800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528854] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8e610 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.528881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.038 [2024-02-13 08:25:02.528931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:29.038 [2024-02-13 08:25:02.528936] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6add0 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.038 [2024-02-13 08:25:02.529446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 
00:24:29.039 [2024-02-13 08:25:02.529458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 
08:25:02.529533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.529981] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:29.039 [2024-02-13 08:25:02.530004] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d250 (9): Bad file descriptor 00:24:29.039 [2024-02-13 08:25:02.538065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538082] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538154] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538228] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032500 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.538896] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6dce0 (9): Bad file descriptor 00:24:29.039 [2024-02-13 08:25:02.538942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.039 [2024-02-13 08:25:02.538953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.039 [2024-02-13 08:25:02.538960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.039 [2024-02-13 08:25:02.538966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.039 [2024-02-13 08:25:02.538973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.039 [2024-02-13 
08:25:02.538979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.039 [2024-02-13 08:25:02.538986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.039 [2024-02-13 08:25:02.538992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.039 [2024-02-13 08:25:02.538998] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e536b0 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.539016] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db0310 (9): Bad file descriptor 00:24:29.039 [2024-02-13 08:25:02.539036] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d97630 (9): Bad file descriptor 00:24:29.039 [2024-02-13 08:25:02.539049] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e610 (9): Bad file descriptor 00:24:29.039 [2024-02-13 08:25:02.539049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.539069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.539073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.039 [2024-02-13 08:25:02.539076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.539081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.039 
[2024-02-13 08:25:02.539083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.539089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.039 [2024-02-13 08:25:02.539090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.539098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.039 [2024-02-13 08:25:02.539098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.039 [2024-02-13 08:25:02.539108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.040 [2024-02-13 08:25:02.539114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.040 [2024-02-13 08:25:02.539128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to 
be set 00:24:29.040 [2024-02-13 08:25:02.539131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539138] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e89d80 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539152] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6add0 (9): Bad file descriptor 00:24:29.040 [2024-02-13 08:25:02.539154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 
[2024-02-13 08:25:02.539192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 
00:24:29.040 [2024-02-13 08:25:02.539253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same 
with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539384] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.040 [2024-02-13 08:25:02.539421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.040 [2024-02-13 08:25:02.539428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 
[2024-02-13 08:25:02.539434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.040 [2024-02-13 08:25:02.539435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same 
with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032990 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.539493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.539887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.539895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.540174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032e20 is same with the state(5) to be set 00:24:29.041 [2024-02-13 08:25:02.547131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.547143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.547151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.547160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.041 [2024-02-13 08:25:02.547167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.041 [2024-02-13 08:25:02.547175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 
08:25:02.547271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.547425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547432] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2aac310 is same with the state(5) to be set 00:24:29.042 [2024-02-13 08:25:02.547487] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2aac310 was disconnected and freed. reset controller. 00:24:29.042 [2024-02-13 08:25:02.547608] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2e1a3f0 was disconnected and freed. reset controller. 00:24:29.042 [2024-02-13 08:25:02.547675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:29.042 [2024-02-13 08:25:02.547684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.547691] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48cf0 is same with the state(5) to be set 00:24:29.042 [2024-02-13 08:25:02.547709] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d250 (9): Bad file descriptor 00:24:29.042 [2024-02-13 08:25:02.547723] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e48cf0 (9): Bad file descriptor 00:24:29.042 [2024-02-13 08:25:02.549461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:29.042 [2024-02-13 08:25:02.549484] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e536b0 (9): Bad file descriptor 00:24:29.042 [2024-02-13 08:25:02.549524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.042 [2024-02-13 08:25:02.549533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.042 [2024-02-13 08:25:02.549547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.042 [2024-02-13 08:25:02.549561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.042 [2024-02-13 08:25:02.549575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549581] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d978e0 is same with the state(5) to be set 00:24:29.042 [2024-02-13 08:25:02.549599] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e89d80 (9): Bad file descriptor 00:24:29.042 [2024-02-13 08:25:02.549730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.042 [2024-02-13 08:25:02.549879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.042 [2024-02-13 08:25:02.549887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.549901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.549915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.549930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.549944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.549959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.549974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.549989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.549995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 
08:25:02.550009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550145] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c4f080 is same with the state(5) to be set 00:24:29.043 [2024-02-13 08:25:02.550198] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2c4f080 was disconnected and freed. reset controller. 
00:24:29.043 [2024-02-13 08:25:02.550236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:29.043 [2024-02-13 08:25:02.550246] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d978e0 (9): Bad file descriptor 00:24:29.043 [2024-02-13 08:25:02.550260] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:29.043 [2024-02-13 08:25:02.550267] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:29.043 [2024-02-13 08:25:02.550275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:29.043 [2024-02-13 08:25:02.550286] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:29.043 [2024-02-13 08:25:02.550292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:29.043 [2024-02-13 08:25:02.550298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:24:29.043 [2024-02-13 08:25:02.550334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.043 [2024-02-13 08:25:02.550474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.043 [2024-02-13 08:25:02.550482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 
08:25:02.550664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 
[2024-02-13 08:25:02.550904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.550988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.550996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.551003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.551011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.044 [2024-02-13 08:25:02.551017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.044 [2024-02-13 08:25:02.551025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.045 [2024-02-13 08:25:02.551031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.045 [2024-02-13 08:25:02.551039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.045 [2024-02-13 08:25:02.551045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.045 [2024-02-13 08:25:02.551053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.045 [2024-02-13 08:25:02.551059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.045 [2024-02-13 08:25:02.551067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.045 [2024-02-13 08:25:02.551073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ/WRITE command + "ABORTED - SQ DELETION (00/08)" completion pairs repeated for the remaining outstanding commands on this qpair (cid:26-63, lba:18560-24192)]
00:24:29.045 [2024-02-13 08:25:02.551250] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc420 is same with the state(5) to be set
00:24:29.045 [2024-02-13 08:25:02.552520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.045 [2024-02-13 08:25:02.552536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ/WRITE command + "ABORTED - SQ DELETION (00/08)" completion pairs repeated for the remaining outstanding commands on this qpair (cid:0-63, lba:13824-24192)]
00:24:29.047 [2024-02-13 08:25:02.553770] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db2660 is same with the state(5) to be set
00:24:29.047 [2024-02-13 08:25:02.555016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.047 [2024-02-13 08:25:02.555032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ/WRITE command + "ABORTED - SQ DELETION (00/08)" completion pairs repeated for the remaining outstanding commands on this qpair (cid:0-53, lba:13824-20736)]
00:24:29.048 [2024-02-13 08:25:02.555696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.048 [2024-02-13 08:25:02.555704] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 
08:25:02.555928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.555984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.555993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:29.048 [2024-02-13 08:25:02.556253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.556261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.048 [2024-02-13 08:25:02.556271] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421080 is same with the state(5) to be set 00:24:29.048 [2024-02-13 08:25:02.557542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.048 [2024-02-13 08:25:02.557556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.557982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.557991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 08:25:02.558274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.049 [2024-02-13 08:25:02.558282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.049 [2024-02-13 
08:25:02.558293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:29.050 [2024-02-13 08:25:02.558619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558726] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.558783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.558792] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c3bd0 is same with the state(5) to be set 00:24:29.050 [2024-02-13 08:25:02.560048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.050 [2024-02-13 08:25:02.560275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.050 [2024-02-13 08:25:02.560284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560421] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15104 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 
08:25:02.560643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:29.051 [2024-02-13 08:25:02.560978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.560987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.560998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.561006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.051 [2024-02-13 08:25:02.561016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.051 [2024-02-13 08:25:02.561025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:44 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.052 [2024-02-13 08:25:02.561287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.052 [2024-02-13 08:25:02.561297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.052 [2024-02-13 08:25:02.561307] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2909600 is same with the state(5) to be set
00:24:29.052 [2024-02-13 08:25:02.563301] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:29.052 [2024-02-13 08:25:02.563323] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:29.052 [2024-02-13 08:25:02.563331] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:29.052 [2024-02-13 08:25:02.563341] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:29.052 [2024-02-13 08:25:02.563349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:29.052 [2024-02-13 08:25:02.563763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.052 [2024-02-13 08:25:02.564071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.052 [2024-02-13 08:25:02.564082] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e536b0 with addr=10.0.0.2, port=4420
00:24:29.052 [2024-02-13 08:25:02.564090] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e536b0 is same with the state(5) to be set
00:24:29.052 [2024-02-13 08:25:02.564135] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:29.052 [2024-02-13 08:25:02.564147] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:29.052 [2024-02-13 08:25:02.564158] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:29.052 [2024-02-13 08:25:02.564169] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e536b0 (9): Bad file descriptor
00:24:29.052 [2024-02-13 08:25:02.564241] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:29.052 [2024-02-13 08:25:02.564252] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:29.052 [2024-02-13 08:25:02.564262] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:29.052 task offset: 18944 on job bdev=Nvme10n1 fails
00:24:29.052
00:24:29.052 Latency(us)
00:24:29.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:29.052 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme1n1 ended in about 0.38 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme1n1 : 0.38 328.78 20.55 167.00 0.00 127747.57 73899.64 121834.54
00:24:29.052 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme2n1 ended in about 0.39 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme2n1 : 0.39 326.64 20.42 165.91 0.00 126515.03 80390.83 105356.92
00:24:29.052 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme3n1 ended in about 0.39 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme3n1 : 0.39 324.55 20.28 164.85 0.00 125127.44 70404.39 109351.50
00:24:29.052 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme4n1 ended in about 0.39 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme4n1 : 0.39 322.46 20.15 163.79 0.00 123844.97 73899.64 103858.96
00:24:29.052 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme5n1 ended in about 0.36 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme5n1 : 0.36 454.38 28.40 177.32 0.00 93075.08 9050.21 95869.81
00:24:29.052 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme6n1 ended in about 0.39 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme6n1 : 0.39 320.42 20.03 162.75 0.00 120517.91 77894.22 97367.77
00:24:29.052 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme7n1 ended in about 0.38 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme7n1 : 0.38 331.88 20.74 168.57 0.00 113787.58 56173.71 98865.74
00:24:29.052 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme8n1 ended in about 0.39 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme8n1 : 0.39 410.90 25.68 71.02 0.00 114033.13 13419.28 99864.38
00:24:29.052 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme9n1 : 0.37 513.59 32.10 0.00 0.00 103848.12 11359.57 95370.48
00:24:29.052 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:29.052 Job: Nvme10n1 ended in about 0.35 seconds with error
00:24:29.052 Verification LBA range: start 0x0 length 0x400
00:24:29.052 Nvme10n1 : 0.35 355.21 22.20 180.42 0.00 98921.17 4868.39 102860.31
00:24:29.052 ===================================================================================================================
00:24:29.052 Total : 3688.80 230.55 1421.64 0.00 114316.96 4868.39 121834.54
00:24:29.053 [2024-02-13 08:25:02.588251] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:29.053 [2024-02-13 08:25:02.588728]
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.589141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.589152] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d978e0 with addr=10.0.0.2, port=4420
00:24:29.053 [2024-02-13 08:25:02.589162] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d978e0 is same with the state(5) to be set
00:24:29.053 [2024-02-13 08:25:02.589446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.589817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.589827] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d97630 with addr=10.0.0.2, port=4420
00:24:29.053 [2024-02-13 08:25:02.589834] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97630 is same with the state(5) to be set
00:24:29.053 [2024-02-13 08:25:02.590163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.590511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.590521] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db0310 with addr=10.0.0.2, port=4420
00:24:29.053 [2024-02-13 08:25:02.590528] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db0310 is same with the state(5) to be set
00:24:29.053 [2024-02-13 08:25:02.590855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.591187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.591196] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6dce0 with addr=10.0.0.2, port=4420
00:24:29.053 [2024-02-13 08:25:02.591204] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6dce0 is same with the state(5) to be set
00:24:29.053 [2024-02-13 08:25:02.592645] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:24:29.053 [2024-02-13 08:25:02.592676] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:29.053 [2024-02-13 08:25:02.592695] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:29.053 [2024-02-13 08:25:02.593108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.593404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.593415] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6add0 with addr=10.0.0.2, port=4420
00:24:29.053 [2024-02-13 08:25:02.593423] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6add0 is same with the state(5) to be set
00:24:29.053 [2024-02-13 08:25:02.593751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.594103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13 08:25:02.594113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e8e610 with addr=10.0.0.2, port=4420
00:24:29.053 [2024-02-13 08:25:02.594120] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8e610 is same with the state(5) to be set
00:24:29.053 [2024-02-13 08:25:02.594454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-02-13
08:25:02.594804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-02-13 08:25:02.594814] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e89d80 with addr=10.0.0.2, port=4420 00:24:29.053 [2024-02-13 08:25:02.594821] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e89d80 is same with the state(5) to be set 00:24:29.053 [2024-02-13 08:25:02.594833] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d978e0 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.594844] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d97630 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.594853] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db0310 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.594862] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6dce0 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.594870] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.594876] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.594885] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:29.053 [2024-02-13 08:25:02.594922] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:29.053 [2024-02-13 08:25:02.594933] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:29.053 [2024-02-13 08:25:02.594942] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:29.053 [2024-02-13 08:25:02.594953] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:29.053 [2024-02-13 08:25:02.594962] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:29.053 [2024-02-13 08:25:02.595041] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.053 [2024-02-13 08:25:02.595432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-02-13 08:25:02.595665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-02-13 08:25:02.595675] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6d250 with addr=10.0.0.2, port=4420 00:24:29.053 [2024-02-13 08:25:02.595683] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6d250 is same with the state(5) to be set 00:24:29.053 [2024-02-13 08:25:02.595904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-02-13 08:25:02.596260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.053 [2024-02-13 08:25:02.596271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e48cf0 with addr=10.0.0.2, port=4420 00:24:29.053 [2024-02-13 08:25:02.596278] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48cf0 is same with the state(5) to be set 00:24:29.053 [2024-02-13 08:25:02.596288] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6add0 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.596296] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e8e610 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.596305] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1e89d80 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.596313] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.596319] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.596325] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:29.053 [2024-02-13 08:25:02.596336] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.596342] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.596349] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.053 [2024-02-13 08:25:02.596357] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.596363] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.596369] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:29.053 [2024-02-13 08:25:02.596379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.596384] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.596390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:29.053 [2024-02-13 08:25:02.596459] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.053 [2024-02-13 08:25:02.596467] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.053 [2024-02-13 08:25:02.596472] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.053 [2024-02-13 08:25:02.596478] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.053 [2024-02-13 08:25:02.596485] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d250 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.596493] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e48cf0 (9): Bad file descriptor 00:24:29.053 [2024-02-13 08:25:02.596501] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.596506] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.596512] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:29.053 [2024-02-13 08:25:02.596521] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.596526] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.596532] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:24:29.053 [2024-02-13 08:25:02.596544] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:29.053 [2024-02-13 08:25:02.596550] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:29.053 [2024-02-13 08:25:02.596556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:29.053 [2024-02-13 08:25:02.596584] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.053 [2024-02-13 08:25:02.596591] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.053 [2024-02-13 08:25:02.596597] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.053 [2024-02-13 08:25:02.596602] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:29.054 [2024-02-13 08:25:02.596608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:29.054 [2024-02-13 08:25:02.596614] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:29.054 [2024-02-13 08:25:02.596623] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:29.054 [2024-02-13 08:25:02.596630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:29.054 [2024-02-13 08:25:02.596636] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:29.054 [2024-02-13 08:25:02.596678] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:29.054 [2024-02-13 08:25:02.596685] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.313 08:25:02 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:29.313 08:25:02 -- target/shutdown.sh@138 -- # sleep 1 00:24:30.694 08:25:03 -- target/shutdown.sh@141 -- # kill -9 2359803 00:24:30.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2359803) - No such process 00:24:30.694 08:25:03 -- target/shutdown.sh@141 -- # true 00:24:30.694 08:25:03 -- target/shutdown.sh@143 -- # stoptarget 00:24:30.694 08:25:03 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:30.694 08:25:03 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:30.694 08:25:03 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:30.694 08:25:03 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:30.694 08:25:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:30.694 08:25:03 -- nvmf/common.sh@116 -- # sync 00:24:30.694 08:25:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:30.694 08:25:04 -- nvmf/common.sh@119 -- # set +e 00:24:30.694 08:25:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:30.694 08:25:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:30.694 rmmod nvme_tcp 00:24:30.694 rmmod nvme_fabrics 00:24:30.694 rmmod nvme_keyring 00:24:30.694 08:25:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:30.694 08:25:04 -- nvmf/common.sh@123 -- # set -e 00:24:30.694 08:25:04 -- nvmf/common.sh@124 -- # return 0 00:24:30.694 08:25:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:30.694 08:25:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.694 08:25:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:30.694 08:25:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:30.694 08:25:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.694 08:25:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:30.694 08:25:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.694 08:25:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.694 08:25:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.604 08:25:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:32.604 00:24:32.604 real 0m7.678s 00:24:32.604 user 0m18.617s 00:24:32.604 sys 0m1.238s 00:24:32.604 08:25:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:32.604 08:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 ************************************ 00:24:32.604 END TEST nvmf_shutdown_tc3 00:24:32.604 ************************************ 00:24:32.604 08:25:06 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:32.604 00:24:32.604 real 0m30.733s 00:24:32.604 user 1m15.900s 00:24:32.604 sys 0m8.333s 00:24:32.604 08:25:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:32.604 08:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 ************************************ 00:24:32.604 END TEST nvmf_shutdown 00:24:32.604 ************************************ 00:24:32.604 08:25:06 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:24:32.604 08:25:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:32.604 08:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 08:25:06 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:24:32.604 08:25:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:32.604 08:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 08:25:06 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:24:32.604 08:25:06 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:32.604 08:25:06 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:32.604 
08:25:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:32.604 08:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:32.604 ************************************ 00:24:32.604 START TEST nvmf_multicontroller 00:24:32.604 ************************************ 00:24:32.604 08:25:06 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:32.864 * Looking for test storage... 00:24:32.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.864 08:25:06 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.864 08:25:06 -- nvmf/common.sh@7 -- # uname -s 00:24:32.864 08:25:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.864 08:25:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.864 08:25:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.864 08:25:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.864 08:25:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.864 08:25:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.864 08:25:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.864 08:25:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.864 08:25:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.864 08:25:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.864 08:25:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:32.864 08:25:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:32.864 08:25:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.864 08:25:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.864 08:25:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.864 08:25:06 -- nvmf/common.sh@44 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.864 08:25:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.864 08:25:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.864 08:25:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.864 08:25:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.864 08:25:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.864 08:25:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.864 08:25:06 -- paths/export.sh@5 -- # export PATH 00:24:32.864 08:25:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.864 08:25:06 -- nvmf/common.sh@46 -- # : 0 00:24:32.864 08:25:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:32.864 08:25:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:32.864 08:25:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:32.864 08:25:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.864 08:25:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.864 08:25:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:32.864 08:25:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:32.864 08:25:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:32.864 08:25:06 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.864 08:25:06 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.864 08:25:06 -- 
host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:32.864 08:25:06 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:32.864 08:25:06 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.864 08:25:06 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:32.864 08:25:06 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:32.864 08:25:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:32.864 08:25:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.864 08:25:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:32.864 08:25:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:32.864 08:25:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:32.864 08:25:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.865 08:25:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.865 08:25:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.865 08:25:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:32.865 08:25:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:32.865 08:25:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:32.865 08:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:39.441 08:25:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:39.441 08:25:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:39.441 08:25:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:39.441 08:25:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:39.441 08:25:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:39.441 08:25:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:39.441 08:25:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:39.441 08:25:11 -- nvmf/common.sh@294 -- # net_devs=() 00:24:39.441 08:25:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:39.441 08:25:11 -- nvmf/common.sh@295 -- # e810=() 00:24:39.441 08:25:11 -- nvmf/common.sh@295 -- # local 
-ga e810 00:24:39.441 08:25:11 -- nvmf/common.sh@296 -- # x722=() 00:24:39.441 08:25:11 -- nvmf/common.sh@296 -- # local -ga x722 00:24:39.441 08:25:11 -- nvmf/common.sh@297 -- # mlx=() 00:24:39.441 08:25:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:39.441 08:25:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.441 08:25:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:39.441 08:25:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:39.441 08:25:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:39.441 08:25:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:39.441 08:25:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:39.441 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:39.441 08:25:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:39.441 08:25:11 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:39.441 08:25:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:39.441 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:39.441 08:25:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:39.441 08:25:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:39.441 08:25:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:39.442 08:25:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.442 08:25:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:39.442 08:25:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.442 08:25:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:39.442 Found net devices under 0000:af:00.0: cvl_0_0 00:24:39.442 08:25:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.442 08:25:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:39.442 08:25:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.442 08:25:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:39.442 08:25:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.442 08:25:11 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:39.442 Found net devices under 0000:af:00.1: cvl_0_1 00:24:39.442 08:25:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.442 08:25:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:39.442 08:25:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:39.442 08:25:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:39.442 08:25:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:39.442 08:25:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:39.442 08:25:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.442 08:25:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.442 08:25:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.442 08:25:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:39.442 08:25:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.442 08:25:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.442 08:25:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:39.442 08:25:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.442 08:25:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.442 08:25:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:39.442 08:25:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:39.442 08:25:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.442 08:25:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.442 08:25:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.442 08:25:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.442 08:25:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:39.442 08:25:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.442 08:25:12 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:24:39.442 08:25:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.442 08:25:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:39.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:39.442 00:24:39.442 --- 10.0.0.2 ping statistics --- 00:24:39.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.442 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:39.442 08:25:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:24:39.442 00:24:39.442 --- 10.0.0.1 ping statistics --- 00:24:39.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.442 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:24:39.442 08:25:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.442 08:25:12 -- nvmf/common.sh@410 -- # return 0 00:24:39.442 08:25:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:39.442 08:25:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.442 08:25:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:39.442 08:25:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:39.442 08:25:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.442 08:25:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:39.442 08:25:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:39.442 08:25:12 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:39.442 08:25:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:39.442 08:25:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:39.442 08:25:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.442 08:25:12 -- nvmf/common.sh@469 -- # nvmfpid=2364152 00:24:39.442 
08:25:12 -- nvmf/common.sh@470 -- # waitforlisten 2364152 00:24:39.442 08:25:12 -- common/autotest_common.sh@817 -- # '[' -z 2364152 ']' 00:24:39.442 08:25:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.442 08:25:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.442 08:25:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.442 08:25:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:39.442 08:25:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.442 08:25:12 -- common/autotest_common.sh@10 -- # set +x 00:24:39.442 [2024-02-13 08:25:12.254612] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:39.442 [2024-02-13 08:25:12.254663] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.442 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.442 [2024-02-13 08:25:12.317738] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:39.442 [2024-02-13 08:25:12.392575] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:39.442 [2024-02-13 08:25:12.392696] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.442 [2024-02-13 08:25:12.392704] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.442 [2024-02-13 08:25:12.392711] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.442 [2024-02-13 08:25:12.392825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.442 [2024-02-13 08:25:12.392936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.442 [2024-02-13 08:25:12.392937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.442 08:25:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:39.442 08:25:13 -- common/autotest_common.sh@850 -- # return 0 00:24:39.442 08:25:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:39.442 08:25:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:39.442 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.442 08:25:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.442 08:25:13 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.442 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.442 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.442 [2024-02-13 08:25:13.088707] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.442 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.442 08:25:13 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:39.442 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.442 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 Malloc0 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 [2024-02-13 08:25:13.152459] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 [2024-02-13 08:25:13.160396] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 Malloc1 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:39.702 08:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.702 08:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.702 08:25:13 -- host/multicontroller.sh@44 -- # bdevperf_pid=2364400 00:24:39.702 08:25:13 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:39.702 08:25:13 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.702 08:25:13 -- host/multicontroller.sh@47 -- # waitforlisten 2364400 /var/tmp/bdevperf.sock 00:24:39.702 08:25:13 -- common/autotest_common.sh@817 -- # '[' -z 2364400 ']' 00:24:39.702 08:25:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.702 08:25:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.702 08:25:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:39.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.702 08:25:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.702 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:40.639 08:25:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:40.639 08:25:14 -- common/autotest_common.sh@850 -- # return 0 00:24:40.639 08:25:14 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:40.639 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.639 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.639 NVMe0n1 00:24:40.639 08:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.639 08:25:14 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.639 08:25:14 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:40.639 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.639 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.639 08:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.639 1 00:24:40.639 08:25:14 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:40.639 08:25:14 -- common/autotest_common.sh@638 -- # local es=0 00:24:40.639 08:25:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:40.639 08:25:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:40.639 08:25:14 -- common/autotest_common.sh@630 -- # 
case "$(type -t "$arg")" in 00:24:40.639 08:25:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:40.639 08:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.639 08:25:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:40.639 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.639 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.640 request: 00:24:40.640 { 00:24:40.640 "name": "NVMe0", 00:24:40.640 "trtype": "tcp", 00:24:40.640 "traddr": "10.0.0.2", 00:24:40.640 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:40.640 "hostaddr": "10.0.0.2", 00:24:40.640 "hostsvcid": "60000", 00:24:40.640 "adrfam": "ipv4", 00:24:40.640 "trsvcid": "4420", 00:24:40.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.640 "method": "bdev_nvme_attach_controller", 00:24:40.640 "req_id": 1 00:24:40.640 } 00:24:40.640 Got JSON-RPC error response 00:24:40.640 response: 00:24:40.640 { 00:24:40.640 "code": -114, 00:24:40.640 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:40.640 } 00:24:40.640 08:25:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@641 -- # es=1 00:24:40.640 08:25:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:40.640 08:25:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:40.640 08:25:14 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:40.640 08:25:14 -- common/autotest_common.sh@638 -- # local es=0 00:24:40.640 08:25:14 -- common/autotest_common.sh@640 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:40.640 08:25:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.640 08:25:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:40.640 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.640 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.640 request: 00:24:40.640 { 00:24:40.640 "name": "NVMe0", 00:24:40.640 "trtype": "tcp", 00:24:40.640 "traddr": "10.0.0.2", 00:24:40.640 "hostaddr": "10.0.0.2", 00:24:40.640 "hostsvcid": "60000", 00:24:40.640 "adrfam": "ipv4", 00:24:40.640 "trsvcid": "4420", 00:24:40.640 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:40.640 "method": "bdev_nvme_attach_controller", 00:24:40.640 "req_id": 1 00:24:40.640 } 00:24:40.640 Got JSON-RPC error response 00:24:40.640 response: 00:24:40.640 { 00:24:40.640 "code": -114, 00:24:40.640 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:40.640 } 00:24:40.640 08:25:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@641 -- # es=1 00:24:40.640 08:25:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:40.640 08:25:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:40.640 08:25:14 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:40.640 08:25:14 -- common/autotest_common.sh@638 -- # local es=0 00:24:40.640 08:25:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:40.640 08:25:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.640 08:25:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:40.640 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.640 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.640 request: 00:24:40.640 { 00:24:40.640 "name": "NVMe0", 00:24:40.640 "trtype": "tcp", 00:24:40.640 "traddr": "10.0.0.2", 00:24:40.640 "hostaddr": "10.0.0.2", 00:24:40.640 "hostsvcid": "60000", 00:24:40.640 "adrfam": "ipv4", 00:24:40.640 "trsvcid": "4420", 00:24:40.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.640 "multipath": "disable", 00:24:40.640 "method": "bdev_nvme_attach_controller", 00:24:40.640 "req_id": 1 00:24:40.640 } 00:24:40.640 Got JSON-RPC error response 00:24:40.640 response: 00:24:40.640 { 00:24:40.640 "code": -114, 00:24:40.640 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:40.640 } 00:24:40.640 08:25:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@641 -- # es=1 00:24:40.640 08:25:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:40.640 08:25:14 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:40.640 08:25:14 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:40.640 08:25:14 -- common/autotest_common.sh@638 -- # local es=0 00:24:40.640 08:25:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:40.640 08:25:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:40.640 08:25:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:40.640 08:25:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:40.640 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.640 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.640 request: 00:24:40.640 { 00:24:40.640 "name": "NVMe0", 00:24:40.640 "trtype": "tcp", 00:24:40.640 "traddr": "10.0.0.2", 00:24:40.640 "hostaddr": "10.0.0.2", 00:24:40.640 "hostsvcid": "60000", 00:24:40.640 "adrfam": "ipv4", 00:24:40.640 "trsvcid": "4420", 00:24:40.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.640 "multipath": "failover", 00:24:40.640 "method": "bdev_nvme_attach_controller", 00:24:40.640 "req_id": 1 00:24:40.640 } 00:24:40.640 Got JSON-RPC error response 00:24:40.640 response: 00:24:40.640 { 00:24:40.640 "code": -114, 00:24:40.640 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:24:40.640 } 00:24:40.640 08:25:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@641 -- # es=1 00:24:40.640 08:25:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:40.640 08:25:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:40.640 08:25:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:40.640 08:25:14 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.640 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.640 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.900 00:24:40.900 08:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.900 08:25:14 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.900 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.900 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.900 08:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.900 08:25:14 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:40.900 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.900 08:25:14 -- common/autotest_common.sh@10 -- # set +x 00:24:40.900 00:24:40.900 08:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.900 08:25:14 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.900 08:25:14 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:40.900 08:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.900 08:25:14 -- common/autotest_common.sh@10 -- # set +x 
00:24:40.900 08:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.900 08:25:14 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:40.900 08:25:14 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.281 0 00:24:42.281 08:25:15 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:42.281 08:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.281 08:25:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.281 08:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.281 08:25:15 -- host/multicontroller.sh@100 -- # killprocess 2364400 00:24:42.281 08:25:15 -- common/autotest_common.sh@924 -- # '[' -z 2364400 ']' 00:24:42.281 08:25:15 -- common/autotest_common.sh@928 -- # kill -0 2364400 00:24:42.281 08:25:15 -- common/autotest_common.sh@929 -- # uname 00:24:42.281 08:25:15 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:42.281 08:25:15 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2364400 00:24:42.281 08:25:15 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:42.281 08:25:15 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:42.281 08:25:15 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2364400' 00:24:42.281 killing process with pid 2364400 00:24:42.281 08:25:15 -- common/autotest_common.sh@943 -- # kill 2364400 00:24:42.281 08:25:15 -- common/autotest_common.sh@948 -- # wait 2364400 00:24:42.281 08:25:15 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.281 08:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.281 08:25:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.281 08:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.281 08:25:15 -- host/multicontroller.sh@103 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:42.281 08:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.281 08:25:15 -- common/autotest_common.sh@10 -- # set +x 00:24:42.541 08:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.541 08:25:15 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:42.541 08:25:15 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.541 08:25:15 -- common/autotest_common.sh@1595 -- # read -r file 00:24:42.541 08:25:15 -- common/autotest_common.sh@1594 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:42.541 08:25:15 -- common/autotest_common.sh@1594 -- # sort -u 00:24:42.541 08:25:15 -- common/autotest_common.sh@1596 -- # cat 00:24:42.541 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:42.541 [2024-02-13 08:25:13.256315] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:24:42.541 [2024-02-13 08:25:13.256365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364400 ] 00:24:42.541 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.541 [2024-02-13 08:25:13.314450] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.541 [2024-02-13 08:25:13.391517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.541 [2024-02-13 08:25:14.559611] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 1d34f643-ce77-485b-9639-90b4fa8f3b3b already exists 00:24:42.541 [2024-02-13 08:25:14.559642] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:1d34f643-ce77-485b-9639-90b4fa8f3b3b alias for bdev NVMe1n1 00:24:42.541 [2024-02-13 08:25:14.559654] bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:42.541 Running I/O for 1 seconds... 00:24:42.541 00:24:42.541 Latency(us) 00:24:42.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.541 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:42.541 NVMe0n1 : 1.00 23921.85 93.44 0.00 0.00 5334.06 4618.73 18974.23 00:24:42.541 =================================================================================================================== 00:24:42.541 Total : 23921.85 93.44 0.00 0.00 5334.06 4618.73 18974.23 00:24:42.541 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.541 00:24:42.541 Latency(us) 00:24:42.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.541 =================================================================================================================== 00:24:42.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.541 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:42.541 08:25:15 -- 
common/autotest_common.sh@1601 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.541 08:25:15 -- common/autotest_common.sh@1595 -- # read -r file 00:24:42.541 08:25:15 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:42.541 08:25:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:42.541 08:25:15 -- nvmf/common.sh@116 -- # sync 00:24:42.541 08:25:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:42.541 08:25:15 -- nvmf/common.sh@119 -- # set +e 00:24:42.541 08:25:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:42.541 08:25:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:42.541 rmmod nvme_tcp 00:24:42.541 rmmod nvme_fabrics 00:24:42.541 rmmod nvme_keyring 00:24:42.541 08:25:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:42.541 08:25:16 -- nvmf/common.sh@123 -- # set -e 00:24:42.541 08:25:16 -- nvmf/common.sh@124 -- # return 0 00:24:42.541 08:25:16 -- nvmf/common.sh@477 -- # '[' -n 2364152 ']' 00:24:42.541 08:25:16 -- nvmf/common.sh@478 -- # killprocess 2364152 00:24:42.541 08:25:16 -- common/autotest_common.sh@924 -- # '[' -z 2364152 ']' 00:24:42.541 08:25:16 -- common/autotest_common.sh@928 -- # kill -0 2364152 00:24:42.541 08:25:16 -- common/autotest_common.sh@929 -- # uname 00:24:42.541 08:25:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:42.542 08:25:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2364152 00:24:42.542 08:25:16 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:24:42.542 08:25:16 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:24:42.542 08:25:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2364152' 00:24:42.542 killing process with pid 2364152 00:24:42.542 08:25:16 -- common/autotest_common.sh@943 -- # kill 2364152 00:24:42.542 08:25:16 -- common/autotest_common.sh@948 -- # wait 2364152 00:24:42.801 08:25:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:42.801 
08:25:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:42.801 08:25:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:42.801 08:25:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.801 08:25:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:42.801 08:25:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.801 08:25:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.801 08:25:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.340 08:25:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:45.340 00:24:45.340 real 0m12.160s 00:24:45.340 user 0m16.265s 00:24:45.340 sys 0m5.224s 00:24:45.340 08:25:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:45.340 08:25:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.340 ************************************ 00:24:45.340 END TEST nvmf_multicontroller 00:24:45.340 ************************************ 00:24:45.340 08:25:18 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:45.340 08:25:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:45.340 08:25:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:45.340 08:25:18 -- common/autotest_common.sh@10 -- # set +x 00:24:45.340 ************************************ 00:24:45.340 START TEST nvmf_aer 00:24:45.340 ************************************ 00:24:45.340 08:25:18 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:45.340 * Looking for test storage... 
00:24:45.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.340 08:25:18 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.340 08:25:18 -- nvmf/common.sh@7 -- # uname -s 00:24:45.340 08:25:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.340 08:25:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.340 08:25:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.340 08:25:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.340 08:25:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.340 08:25:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.340 08:25:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.340 08:25:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.340 08:25:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.340 08:25:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.340 08:25:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:45.340 08:25:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:45.340 08:25:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.340 08:25:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.340 08:25:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.340 08:25:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.341 08:25:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.341 08:25:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.341 08:25:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.341 08:25:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.341 08:25:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.341 08:25:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.341 08:25:18 -- paths/export.sh@5 -- # export PATH 00:24:45.341 08:25:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.341 08:25:18 -- nvmf/common.sh@46 -- # : 0 00:24:45.341 08:25:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:45.341 08:25:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:45.341 08:25:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:45.341 08:25:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.341 08:25:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.341 08:25:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:45.341 08:25:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:45.341 08:25:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:45.341 08:25:18 -- host/aer.sh@11 -- # nvmftestinit 00:24:45.341 08:25:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:45.341 08:25:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.341 08:25:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:45.341 08:25:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:45.341 08:25:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:45.341 08:25:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.341 08:25:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.341 08:25:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.341 08:25:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:45.341 08:25:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:45.341 08:25:18 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:24:45.341 08:25:18 -- common/autotest_common.sh@10 -- # set +x 00:24:50.661 08:25:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:50.661 08:25:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:50.661 08:25:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:50.661 08:25:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:50.661 08:25:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:50.661 08:25:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:50.661 08:25:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:50.661 08:25:24 -- nvmf/common.sh@294 -- # net_devs=() 00:24:50.661 08:25:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:50.661 08:25:24 -- nvmf/common.sh@295 -- # e810=() 00:24:50.661 08:25:24 -- nvmf/common.sh@295 -- # local -ga e810 00:24:50.661 08:25:24 -- nvmf/common.sh@296 -- # x722=() 00:24:50.661 08:25:24 -- nvmf/common.sh@296 -- # local -ga x722 00:24:50.661 08:25:24 -- nvmf/common.sh@297 -- # mlx=() 00:24:50.661 08:25:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:50.661 08:25:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.661 08:25:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:50.661 08:25:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:50.661 08:25:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:50.661 08:25:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:50.661 08:25:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:50.661 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:50.661 08:25:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:50.661 08:25:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:50.661 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:50.661 08:25:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:50.661 08:25:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:24:50.661 08:25:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.661 08:25:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:50.661 08:25:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.661 08:25:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:50.661 Found net devices under 0000:af:00.0: cvl_0_0 00:24:50.661 08:25:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.661 08:25:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:50.661 08:25:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.661 08:25:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:50.661 08:25:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.661 08:25:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:50.661 Found net devices under 0000:af:00.1: cvl_0_1 00:24:50.661 08:25:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.661 08:25:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:50.661 08:25:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:50.661 08:25:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:50.661 08:25:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:50.661 08:25:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.661 08:25:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.661 08:25:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.661 08:25:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:50.661 08:25:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.661 08:25:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.661 08:25:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:50.661 08:25:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
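The pci_net_devs expansion traced above (gather_supported_nvmf_pci_devs in nvmf/common.sh) maps a PCI address to its kernel net devices by globbing sysfs. A minimal sketch of that lookup; the optional `$base` parameter is a hypothetical addition so the function can be exercised against a fake sysfs tree, and the default path matches the trace:

```shell
# Sketch of the PCI-to-netdev lookup traced above: glob the net/
# subdirectory of a PCI device in sysfs and keep only the interface
# names (e.g. /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0 -> cvl_0_0).
list_net_devs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local devs=("$base/$pci/net/"*)     # one entry per bound interface
    devs=("${devs[@]##*/}")             # strip the path, keep the names
    echo "${devs[@]}"
}
```

This is the same two-step array idiom the trace shows at nvmf/common.sh@382 and @387: glob into an array, then strip the directory prefix with `##*/`.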
00:24:50.661 08:25:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.661 08:25:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:50.661 08:25:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:50.661 08:25:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.661 08:25:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.661 08:25:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.661 08:25:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.661 08:25:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:50.661 08:25:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.921 08:25:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.921 08:25:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.921 08:25:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:50.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:24:50.921 00:24:50.921 --- 10.0.0.2 ping statistics --- 00:24:50.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.921 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:50.921 08:25:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:24:50.921 00:24:50.921 --- 10.0.0.1 ping statistics --- 00:24:50.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.921 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:24:50.921 08:25:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.921 08:25:24 -- nvmf/common.sh@410 -- # return 0 00:24:50.921 08:25:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:50.921 08:25:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.921 08:25:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:50.921 08:25:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:50.921 08:25:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.921 08:25:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:50.921 08:25:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:50.921 08:25:24 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:50.921 08:25:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:50.921 08:25:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:50.921 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.921 08:25:24 -- nvmf/common.sh@469 -- # nvmfpid=2368665 00:24:50.921 08:25:24 -- nvmf/common.sh@470 -- # waitforlisten 2368665 00:24:50.921 08:25:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:50.921 08:25:24 -- common/autotest_common.sh@817 -- # '[' -z 2368665 ']' 00:24:50.921 08:25:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.921 08:25:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:50.921 08:25:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:50.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.921 08:25:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:50.921 08:25:24 -- common/autotest_common.sh@10 -- # set +x 00:24:50.921 [2024-02-13 08:25:24.484417] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:50.921 [2024-02-13 08:25:24.484461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.921 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.921 [2024-02-13 08:25:24.548657] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.181 [2024-02-13 08:25:24.626246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:51.181 [2024-02-13 08:25:24.626351] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.181 [2024-02-13 08:25:24.626359] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.181 [2024-02-13 08:25:24.626366] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
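For reference, the nvmf_tcp_init sequence traced above reduces to a short command list: move the target-side port into a network namespace, address both ends, open TCP/4420, and ping each direction. A dry-run sketch with the interface names and addresses taken from the log; the `run` wrapper is a hypothetical stand-in that only echoes, so the sketch is safe to execute without root or the physical NICs:

```shell
# Dry-run sketch of the nvmf_tcp_init steps traced in the log.
TGT_IF=cvl_0_0                  # target port (goes into the namespace)
INI_IF=cvl_0_1                  # initiator port (stays in the root ns)
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }          # replace the echo with "$@" to execute

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

Isolating the target port in its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real NICs, which is why the trace then launches nvmf_tgt under `ip netns exec cvl_0_0_ns_spdk`.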
00:24:51.181 [2024-02-13 08:25:24.626401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.181 [2024-02-13 08:25:24.626416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.181 [2024-02-13 08:25:24.626504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.181 [2024-02-13 08:25:24.626505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.751 08:25:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:51.751 08:25:25 -- common/autotest_common.sh@850 -- # return 0 00:24:51.751 08:25:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:51.751 08:25:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:51.751 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.751 08:25:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.751 08:25:25 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.751 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.751 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.751 [2024-02-13 08:25:25.330913] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.751 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.751 08:25:25 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:51.751 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.751 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.751 Malloc0 00:24:51.751 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.751 08:25:25 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:51.751 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.751 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.751 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:24:51.751 08:25:25 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:51.751 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.751 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.751 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.751 08:25:25 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.751 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.751 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.751 [2024-02-13 08:25:25.382147] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.751 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.751 08:25:25 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:51.751 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.751 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:51.751 [2024-02-13 08:25:25.389956] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:51.751 [ 00:24:51.751 { 00:24:51.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:51.751 "subtype": "Discovery", 00:24:51.751 "listen_addresses": [], 00:24:51.751 "allow_any_host": true, 00:24:51.751 "hosts": [] 00:24:51.751 }, 00:24:51.751 { 00:24:51.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.751 "subtype": "NVMe", 00:24:51.751 "listen_addresses": [ 00:24:51.751 { 00:24:51.751 "transport": "TCP", 00:24:51.751 "trtype": "TCP", 00:24:51.751 "adrfam": "IPv4", 00:24:51.751 "traddr": "10.0.0.2", 00:24:51.751 "trsvcid": "4420" 00:24:51.751 } 00:24:51.751 ], 00:24:51.751 "allow_any_host": true, 00:24:51.751 "hosts": [], 00:24:51.751 "serial_number": "SPDK00000000000001", 00:24:51.751 "model_number": "SPDK bdev Controller", 
00:24:51.751 "max_namespaces": 2, 00:24:51.751 "min_cntlid": 1, 00:24:51.751 "max_cntlid": 65519, 00:24:51.751 "namespaces": [ 00:24:51.751 { 00:24:51.751 "nsid": 1, 00:24:51.751 "bdev_name": "Malloc0", 00:24:51.751 "name": "Malloc0", 00:24:51.751 "nguid": "4D85B5AD64DE4B5C9D42D7FD7E1A9874", 00:24:51.751 "uuid": "4d85b5ad-64de-4b5c-9d42-d7fd7e1a9874" 00:24:51.751 } 00:24:51.751 ] 00:24:51.751 } 00:24:51.751 ] 00:24:51.751 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.751 08:25:25 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:51.751 08:25:25 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:51.751 08:25:25 -- host/aer.sh@33 -- # aerpid=2368775 00:24:51.751 08:25:25 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:51.751 08:25:25 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:51.751 08:25:25 -- common/autotest_common.sh@1242 -- # local i=0 00:24:51.751 08:25:25 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:51.751 08:25:25 -- common/autotest_common.sh@1244 -- # '[' 0 -lt 200 ']' 00:24:51.751 08:25:25 -- common/autotest_common.sh@1245 -- # i=1 00:24:51.751 08:25:25 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:24:52.010 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.010 08:25:25 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:52.010 08:25:25 -- common/autotest_common.sh@1244 -- # '[' 1 -lt 200 ']' 00:24:52.010 08:25:25 -- common/autotest_common.sh@1245 -- # i=2 00:24:52.010 08:25:25 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:24:52.010 08:25:25 -- common/autotest_common.sh@1243 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:52.010 08:25:25 -- common/autotest_common.sh@1244 -- # '[' 2 -lt 200 ']' 00:24:52.010 08:25:25 -- common/autotest_common.sh@1245 -- # i=3 00:24:52.010 08:25:25 -- common/autotest_common.sh@1246 -- # sleep 0.1 00:24:52.270 08:25:25 -- common/autotest_common.sh@1243 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:52.270 08:25:25 -- common/autotest_common.sh@1249 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:52.270 08:25:25 -- common/autotest_common.sh@1253 -- # return 0 00:24:52.270 08:25:25 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:52.270 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.270 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.270 Malloc1 00:24:52.270 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.270 08:25:25 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:52.270 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.270 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.270 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.270 08:25:25 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:52.270 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.270 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.270 Asynchronous Event Request test 00:24:52.270 Attaching to 10.0.0.2 00:24:52.270 Attached to 10.0.0.2 00:24:52.270 Registering asynchronous event callbacks... 00:24:52.270 Starting namespace attribute notice tests for all controllers... 00:24:52.270 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:52.270 aer_cb - Changed Namespace 00:24:52.270 Cleaning up... 
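The repeated `'[' $i -lt 200 ']'` / `sleep 0.1` lines above are autotest_common.sh's waitforfile helper: the aer binary touches /tmp/aer_touch_file once its AER callback is registered, and the script polls for that file before adding the namespace that should trigger the notice. A minimal re-implementation of the same pattern (names mirror the trace; this is a sketch, not the exact upstream code):

```shell
# Poll for a file every 0.1 s, giving up after 200 attempts (~20 s),
# mirroring the waitforfile loop traced above.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$file" ]    # exit status 0 only if the file showed up in time
}
```

The touch-file handshake guarantees the AER handler is armed before the parent issues `nvmf_subsystem_add_ns`, closing the race between registering the callback and generating the event it must observe.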
00:24:52.270 [ 00:24:52.270 { 00:24:52.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:52.270 "subtype": "Discovery", 00:24:52.270 "listen_addresses": [], 00:24:52.270 "allow_any_host": true, 00:24:52.270 "hosts": [] 00:24:52.270 }, 00:24:52.270 { 00:24:52.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.270 "subtype": "NVMe", 00:24:52.270 "listen_addresses": [ 00:24:52.270 { 00:24:52.270 "transport": "TCP", 00:24:52.270 "trtype": "TCP", 00:24:52.270 "adrfam": "IPv4", 00:24:52.270 "traddr": "10.0.0.2", 00:24:52.270 "trsvcid": "4420" 00:24:52.270 } 00:24:52.270 ], 00:24:52.270 "allow_any_host": true, 00:24:52.270 "hosts": [], 00:24:52.270 "serial_number": "SPDK00000000000001", 00:24:52.270 "model_number": "SPDK bdev Controller", 00:24:52.270 "max_namespaces": 2, 00:24:52.270 "min_cntlid": 1, 00:24:52.270 "max_cntlid": 65519, 00:24:52.270 "namespaces": [ 00:24:52.270 { 00:24:52.270 "nsid": 1, 00:24:52.270 "bdev_name": "Malloc0", 00:24:52.270 "name": "Malloc0", 00:24:52.270 "nguid": "4D85B5AD64DE4B5C9D42D7FD7E1A9874", 00:24:52.270 "uuid": "4d85b5ad-64de-4b5c-9d42-d7fd7e1a9874" 00:24:52.270 }, 00:24:52.270 { 00:24:52.270 "nsid": 2, 00:24:52.270 "bdev_name": "Malloc1", 00:24:52.270 "name": "Malloc1", 00:24:52.270 "nguid": "87DF4B2878F74218BF42B76B93D5DD2C", 00:24:52.270 "uuid": "87df4b28-78f7-4218-bf42-b76b93d5dd2c" 00:24:52.270 } 00:24:52.270 ] 00:24:52.270 } 00:24:52.270 ] 00:24:52.270 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.270 08:25:25 -- host/aer.sh@43 -- # wait 2368775 00:24:52.270 08:25:25 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:52.270 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.270 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.270 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.270 08:25:25 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:52.270 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.270 
08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.270 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.270 08:25:25 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.270 08:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.270 08:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.270 08:25:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.270 08:25:25 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:52.270 08:25:25 -- host/aer.sh@51 -- # nvmftestfini 00:24:52.270 08:25:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:52.271 08:25:25 -- nvmf/common.sh@116 -- # sync 00:24:52.271 08:25:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:52.271 08:25:25 -- nvmf/common.sh@119 -- # set +e 00:24:52.271 08:25:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:52.271 08:25:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:52.271 rmmod nvme_tcp 00:24:52.271 rmmod nvme_fabrics 00:24:52.271 rmmod nvme_keyring 00:24:52.271 08:25:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:52.271 08:25:25 -- nvmf/common.sh@123 -- # set -e 00:24:52.271 08:25:25 -- nvmf/common.sh@124 -- # return 0 00:24:52.271 08:25:25 -- nvmf/common.sh@477 -- # '[' -n 2368665 ']' 00:24:52.271 08:25:25 -- nvmf/common.sh@478 -- # killprocess 2368665 00:24:52.271 08:25:25 -- common/autotest_common.sh@924 -- # '[' -z 2368665 ']' 00:24:52.271 08:25:25 -- common/autotest_common.sh@928 -- # kill -0 2368665 00:24:52.271 08:25:25 -- common/autotest_common.sh@929 -- # uname 00:24:52.271 08:25:25 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:52.271 08:25:25 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2368665 00:24:52.530 08:25:25 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:52.530 08:25:25 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:52.530 08:25:25 -- common/autotest_common.sh@942 -- # echo 
'killing process with pid 2368665' 00:24:52.530 killing process with pid 2368665 00:24:52.530 08:25:25 -- common/autotest_common.sh@943 -- # kill 2368665 00:24:52.530 [2024-02-13 08:25:25.967918] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:52.530 08:25:25 -- common/autotest_common.sh@948 -- # wait 2368665 00:24:52.531 08:25:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:52.531 08:25:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:52.531 08:25:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:52.531 08:25:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.531 08:25:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:52.531 08:25:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.531 08:25:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.531 08:25:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.070 08:25:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:55.070 00:24:55.070 real 0m9.784s 00:24:55.070 user 0m7.756s 00:24:55.070 sys 0m4.811s 00:24:55.070 08:25:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:55.070 08:25:28 -- common/autotest_common.sh@10 -- # set +x 00:24:55.070 ************************************ 00:24:55.070 END TEST nvmf_aer 00:24:55.070 ************************************ 00:24:55.070 08:25:28 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:55.070 08:25:28 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:24:55.070 08:25:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:55.070 08:25:28 -- common/autotest_common.sh@10 -- # set +x 00:24:55.070 ************************************ 00:24:55.070 START TEST nvmf_async_init 00:24:55.070 
************************************ 00:24:55.070 08:25:28 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:55.070 * Looking for test storage... 00:24:55.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.070 08:25:28 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.070 08:25:28 -- nvmf/common.sh@7 -- # uname -s 00:24:55.070 08:25:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.070 08:25:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.070 08:25:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.070 08:25:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.070 08:25:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.070 08:25:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.070 08:25:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.070 08:25:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.070 08:25:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.070 08:25:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.070 08:25:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:55.070 08:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:55.070 08:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.070 08:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.070 08:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.070 08:25:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.070 08:25:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.070 08:25:28 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.070 08:25:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.070 08:25:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.070 08:25:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.070 08:25:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.070 08:25:28 -- paths/export.sh@5 -- # export PATH 00:24:55.070 08:25:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.070 08:25:28 -- nvmf/common.sh@46 -- # : 0 00:24:55.070 08:25:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:55.070 08:25:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:55.070 08:25:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:55.070 08:25:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.070 08:25:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.070 08:25:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:55.070 08:25:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:55.070 08:25:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:55.070 08:25:28 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:55.070 08:25:28 -- host/async_init.sh@14 -- # null_block_size=512 00:24:55.070 08:25:28 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:55.070 08:25:28 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:55.070 08:25:28 -- host/async_init.sh@20 -- # uuidgen 00:24:55.070 08:25:28 -- host/async_init.sh@20 -- # tr -d - 00:24:55.070 08:25:28 -- host/async_init.sh@20 -- # nguid=3c399fb5616f4586b5ec16fe94b08347 00:24:55.071 08:25:28 -- host/async_init.sh@22 -- # nvmftestinit 00:24:55.071 08:25:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:55.071 08:25:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.071 08:25:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:55.071 08:25:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:24:55.071 08:25:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:55.071 08:25:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.071 08:25:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.071 08:25:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.071 08:25:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:55.071 08:25:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:55.071 08:25:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:55.071 08:25:28 -- common/autotest_common.sh@10 -- # set +x 00:25:00.348 08:25:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:00.348 08:25:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:00.348 08:25:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:00.348 08:25:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:00.348 08:25:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:00.348 08:25:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:00.348 08:25:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:00.348 08:25:33 -- nvmf/common.sh@294 -- # net_devs=() 00:25:00.348 08:25:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:00.348 08:25:33 -- nvmf/common.sh@295 -- # e810=() 00:25:00.348 08:25:33 -- nvmf/common.sh@295 -- # local -ga e810 00:25:00.348 08:25:33 -- nvmf/common.sh@296 -- # x722=() 00:25:00.348 08:25:33 -- nvmf/common.sh@296 -- # local -ga x722 00:25:00.348 08:25:33 -- nvmf/common.sh@297 -- # mlx=() 00:25:00.348 08:25:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:00.348 08:25:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.348 08:25:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:00.348 08:25:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:00.348 08:25:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:00.348 08:25:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:00.348 08:25:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:00.348 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:00.348 08:25:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:00.348 08:25:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:00.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:00.348 08:25:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:00.348 08:25:33 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:00.348 08:25:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:00.348 08:25:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.348 08:25:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:00.348 08:25:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.348 08:25:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:00.348 Found net devices under 0000:af:00.0: cvl_0_0 00:25:00.348 08:25:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.348 08:25:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:00.348 08:25:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.348 08:25:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:00.348 08:25:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.348 08:25:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:00.348 Found net devices under 0000:af:00.1: cvl_0_1 00:25:00.348 08:25:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.348 08:25:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:00.348 08:25:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:00.348 08:25:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:00.348 08:25:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.348 08:25:33 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.348 08:25:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.348 08:25:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:00.348 08:25:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.348 08:25:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.348 08:25:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:00.348 08:25:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.348 08:25:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.348 08:25:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:00.348 08:25:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:00.348 08:25:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.348 08:25:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.348 08:25:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.348 08:25:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.348 08:25:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:00.348 08:25:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.348 08:25:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.348 08:25:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.348 08:25:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:00.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:25:00.348 00:25:00.348 --- 10.0.0.2 ping statistics --- 00:25:00.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.348 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:00.348 08:25:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:25:00.348 00:25:00.348 --- 10.0.0.1 ping statistics --- 00:25:00.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.348 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:25:00.348 08:25:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.348 08:25:33 -- nvmf/common.sh@410 -- # return 0 00:25:00.348 08:25:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:00.348 08:25:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.348 08:25:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:00.348 08:25:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.348 08:25:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:00.348 08:25:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:00.348 08:25:33 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:00.348 08:25:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:00.348 08:25:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:00.348 08:25:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.348 08:25:33 -- nvmf/common.sh@469 -- # nvmfpid=2372642 00:25:00.348 08:25:33 -- nvmf/common.sh@470 -- # waitforlisten 2372642 00:25:00.348 08:25:33 -- common/autotest_common.sh@817 -- # '[' -z 2372642 ']' 00:25:00.348 08:25:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.348 08:25:33 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.348 08:25:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.348 08:25:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.348 08:25:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.348 08:25:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:00.349 [2024-02-13 08:25:33.977179] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:00.349 [2024-02-13 08:25:33.977228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.349 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.608 [2024-02-13 08:25:34.038948] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.608 [2024-02-13 08:25:34.114576] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:00.608 [2024-02-13 08:25:34.114688] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.608 [2024-02-13 08:25:34.114697] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.608 [2024-02-13 08:25:34.114703] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.608 [2024-02-13 08:25:34.114720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.178 08:25:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:01.178 08:25:34 -- common/autotest_common.sh@850 -- # return 0 00:25:01.178 08:25:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:01.178 08:25:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 08:25:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.178 08:25:34 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:01.178 08:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 [2024-02-13 08:25:34.788467] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.178 08:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.178 08:25:34 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:01.178 08:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 null0 00:25:01.178 08:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.178 08:25:34 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:01.178 08:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 08:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.178 08:25:34 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:01.178 08:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 08:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.178 08:25:34 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3c399fb5616f4586b5ec16fe94b08347 00:25:01.178 08:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 08:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.178 08:25:34 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:01.178 08:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 [2024-02-13 08:25:34.828671] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.178 08:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.178 08:25:34 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:01.178 08:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.178 08:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:01.438 nvme0n1 00:25:01.438 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.438 08:25:35 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:01.438 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.438 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.438 [ 00:25:01.438 { 00:25:01.438 "name": "nvme0n1", 00:25:01.438 "aliases": [ 00:25:01.438 "3c399fb5-616f-4586-b5ec-16fe94b08347" 00:25:01.438 ], 00:25:01.438 "product_name": "NVMe disk", 00:25:01.438 "block_size": 512, 00:25:01.438 "num_blocks": 2097152, 00:25:01.438 "uuid": "3c399fb5-616f-4586-b5ec-16fe94b08347", 00:25:01.438 "assigned_rate_limits": { 00:25:01.438 "rw_ios_per_sec": 0, 00:25:01.438 "rw_mbytes_per_sec": 0, 00:25:01.438 "r_mbytes_per_sec": 0, 00:25:01.438 "w_mbytes_per_sec": 0 00:25:01.438 }, 00:25:01.438 
"claimed": false, 00:25:01.438 "zoned": false, 00:25:01.438 "supported_io_types": { 00:25:01.438 "read": true, 00:25:01.438 "write": true, 00:25:01.438 "unmap": false, 00:25:01.438 "write_zeroes": true, 00:25:01.438 "flush": true, 00:25:01.438 "reset": true, 00:25:01.438 "compare": true, 00:25:01.438 "compare_and_write": true, 00:25:01.438 "abort": true, 00:25:01.438 "nvme_admin": true, 00:25:01.438 "nvme_io": true 00:25:01.438 }, 00:25:01.438 "driver_specific": { 00:25:01.438 "nvme": [ 00:25:01.438 { 00:25:01.438 "trid": { 00:25:01.438 "trtype": "TCP", 00:25:01.438 "adrfam": "IPv4", 00:25:01.438 "traddr": "10.0.0.2", 00:25:01.438 "trsvcid": "4420", 00:25:01.438 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:01.438 }, 00:25:01.438 "ctrlr_data": { 00:25:01.438 "cntlid": 1, 00:25:01.438 "vendor_id": "0x8086", 00:25:01.438 "model_number": "SPDK bdev Controller", 00:25:01.438 "serial_number": "00000000000000000000", 00:25:01.438 "firmware_revision": "24.05", 00:25:01.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:01.438 "oacs": { 00:25:01.438 "security": 0, 00:25:01.438 "format": 0, 00:25:01.438 "firmware": 0, 00:25:01.438 "ns_manage": 0 00:25:01.438 }, 00:25:01.438 "multi_ctrlr": true, 00:25:01.438 "ana_reporting": false 00:25:01.438 }, 00:25:01.438 "vs": { 00:25:01.438 "nvme_version": "1.3" 00:25:01.438 }, 00:25:01.438 "ns_data": { 00:25:01.438 "id": 1, 00:25:01.438 "can_share": true 00:25:01.438 } 00:25:01.438 } 00:25:01.438 ], 00:25:01.438 "mp_policy": "active_passive" 00:25:01.438 } 00:25:01.438 } 00:25:01.438 ] 00:25:01.438 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.438 08:25:35 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:01.438 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.438 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.438 [2024-02-13 08:25:35.073183] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:25:01.438 [2024-02-13 08:25:35.073249] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b5df0 (9): Bad file descriptor 00:25:01.699 [2024-02-13 08:25:35.204736] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:01.699 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.699 08:25:35 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:01.699 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.699 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.699 [ 00:25:01.699 { 00:25:01.699 "name": "nvme0n1", 00:25:01.699 "aliases": [ 00:25:01.699 "3c399fb5-616f-4586-b5ec-16fe94b08347" 00:25:01.699 ], 00:25:01.699 "product_name": "NVMe disk", 00:25:01.699 "block_size": 512, 00:25:01.699 "num_blocks": 2097152, 00:25:01.699 "uuid": "3c399fb5-616f-4586-b5ec-16fe94b08347", 00:25:01.699 "assigned_rate_limits": { 00:25:01.699 "rw_ios_per_sec": 0, 00:25:01.699 "rw_mbytes_per_sec": 0, 00:25:01.699 "r_mbytes_per_sec": 0, 00:25:01.699 "w_mbytes_per_sec": 0 00:25:01.699 }, 00:25:01.699 "claimed": false, 00:25:01.699 "zoned": false, 00:25:01.699 "supported_io_types": { 00:25:01.699 "read": true, 00:25:01.699 "write": true, 00:25:01.699 "unmap": false, 00:25:01.699 "write_zeroes": true, 00:25:01.699 "flush": true, 00:25:01.699 "reset": true, 00:25:01.699 "compare": true, 00:25:01.699 "compare_and_write": true, 00:25:01.699 "abort": true, 00:25:01.699 "nvme_admin": true, 00:25:01.699 "nvme_io": true 00:25:01.699 }, 00:25:01.699 "driver_specific": { 00:25:01.699 "nvme": [ 00:25:01.699 { 00:25:01.699 "trid": { 00:25:01.699 "trtype": "TCP", 00:25:01.699 "adrfam": "IPv4", 00:25:01.699 "traddr": "10.0.0.2", 00:25:01.699 "trsvcid": "4420", 00:25:01.699 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:01.699 }, 00:25:01.699 "ctrlr_data": { 00:25:01.699 "cntlid": 2, 00:25:01.699 "vendor_id": "0x8086", 00:25:01.699 "model_number": "SPDK bdev 
Controller", 00:25:01.699 "serial_number": "00000000000000000000", 00:25:01.699 "firmware_revision": "24.05", 00:25:01.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:01.699 "oacs": { 00:25:01.699 "security": 0, 00:25:01.699 "format": 0, 00:25:01.699 "firmware": 0, 00:25:01.699 "ns_manage": 0 00:25:01.699 }, 00:25:01.699 "multi_ctrlr": true, 00:25:01.699 "ana_reporting": false 00:25:01.699 }, 00:25:01.699 "vs": { 00:25:01.699 "nvme_version": "1.3" 00:25:01.699 }, 00:25:01.699 "ns_data": { 00:25:01.699 "id": 1, 00:25:01.699 "can_share": true 00:25:01.699 } 00:25:01.699 } 00:25:01.699 ], 00:25:01.699 "mp_policy": "active_passive" 00:25:01.699 } 00:25:01.699 } 00:25:01.699 ] 00:25:01.699 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.699 08:25:35 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.699 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.699 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.699 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.699 08:25:35 -- host/async_init.sh@53 -- # mktemp 00:25:01.699 08:25:35 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.tPGtnQFmwQ 00:25:01.699 08:25:35 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:01.699 08:25:35 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.tPGtnQFmwQ 00:25:01.699 08:25:35 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:01.699 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.699 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.699 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.699 08:25:35 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:01.699 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:25:01.699 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.699 [2024-02-13 08:25:35.253743] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.699 [2024-02-13 08:25:35.253850] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:01.699 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.699 08:25:35 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tPGtnQFmwQ 00:25:01.699 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.699 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.699 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.699 08:25:35 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tPGtnQFmwQ 00:25:01.699 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.699 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.699 [2024-02-13 08:25:35.269783] bdev_nvme_rpc.c: 478:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.699 nvme0n1 00:25:01.699 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.699 08:25:35 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:01.699 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.700 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.700 [ 00:25:01.700 { 00:25:01.700 "name": "nvme0n1", 00:25:01.700 "aliases": [ 00:25:01.700 "3c399fb5-616f-4586-b5ec-16fe94b08347" 00:25:01.700 ], 00:25:01.700 "product_name": "NVMe disk", 00:25:01.700 "block_size": 512, 00:25:01.700 "num_blocks": 2097152, 00:25:01.700 "uuid": "3c399fb5-616f-4586-b5ec-16fe94b08347", 00:25:01.700 "assigned_rate_limits": { 00:25:01.700 "rw_ios_per_sec": 0, 
00:25:01.700 "rw_mbytes_per_sec": 0, 00:25:01.700 "r_mbytes_per_sec": 0, 00:25:01.700 "w_mbytes_per_sec": 0 00:25:01.700 }, 00:25:01.700 "claimed": false, 00:25:01.700 "zoned": false, 00:25:01.700 "supported_io_types": { 00:25:01.700 "read": true, 00:25:01.700 "write": true, 00:25:01.700 "unmap": false, 00:25:01.700 "write_zeroes": true, 00:25:01.700 "flush": true, 00:25:01.700 "reset": true, 00:25:01.700 "compare": true, 00:25:01.700 "compare_and_write": true, 00:25:01.700 "abort": true, 00:25:01.700 "nvme_admin": true, 00:25:01.700 "nvme_io": true 00:25:01.700 }, 00:25:01.700 "driver_specific": { 00:25:01.700 "nvme": [ 00:25:01.700 { 00:25:01.700 "trid": { 00:25:01.700 "trtype": "TCP", 00:25:01.700 "adrfam": "IPv4", 00:25:01.700 "traddr": "10.0.0.2", 00:25:01.700 "trsvcid": "4421", 00:25:01.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:01.700 }, 00:25:01.700 "ctrlr_data": { 00:25:01.700 "cntlid": 3, 00:25:01.700 "vendor_id": "0x8086", 00:25:01.700 "model_number": "SPDK bdev Controller", 00:25:01.700 "serial_number": "00000000000000000000", 00:25:01.700 "firmware_revision": "24.05", 00:25:01.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:01.700 "oacs": { 00:25:01.700 "security": 0, 00:25:01.700 "format": 0, 00:25:01.700 "firmware": 0, 00:25:01.700 "ns_manage": 0 00:25:01.700 }, 00:25:01.700 "multi_ctrlr": true, 00:25:01.700 "ana_reporting": false 00:25:01.700 }, 00:25:01.700 "vs": { 00:25:01.700 "nvme_version": "1.3" 00:25:01.700 }, 00:25:01.700 "ns_data": { 00:25:01.700 "id": 1, 00:25:01.700 "can_share": true 00:25:01.700 } 00:25:01.700 } 00:25:01.700 ], 00:25:01.700 "mp_policy": "active_passive" 00:25:01.700 } 00:25:01.700 } 00:25:01.700 ] 00:25:01.700 08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.700 08:25:35 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.700 08:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.700 08:25:35 -- common/autotest_common.sh@10 -- # set +x 00:25:01.700 
08:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.700 08:25:35 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.tPGtnQFmwQ 00:25:01.700 08:25:35 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:01.700 08:25:35 -- host/async_init.sh@78 -- # nvmftestfini 00:25:01.700 08:25:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:01.700 08:25:35 -- nvmf/common.sh@116 -- # sync 00:25:01.700 08:25:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:01.700 08:25:35 -- nvmf/common.sh@119 -- # set +e 00:25:01.700 08:25:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:01.700 08:25:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:01.700 rmmod nvme_tcp 00:25:01.700 rmmod nvme_fabrics 00:25:01.959 rmmod nvme_keyring 00:25:01.959 08:25:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:01.959 08:25:35 -- nvmf/common.sh@123 -- # set -e 00:25:01.959 08:25:35 -- nvmf/common.sh@124 -- # return 0 00:25:01.959 08:25:35 -- nvmf/common.sh@477 -- # '[' -n 2372642 ']' 00:25:01.959 08:25:35 -- nvmf/common.sh@478 -- # killprocess 2372642 00:25:01.959 08:25:35 -- common/autotest_common.sh@924 -- # '[' -z 2372642 ']' 00:25:01.959 08:25:35 -- common/autotest_common.sh@928 -- # kill -0 2372642 00:25:01.959 08:25:35 -- common/autotest_common.sh@929 -- # uname 00:25:01.959 08:25:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:01.959 08:25:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2372642 00:25:01.959 08:25:35 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:01.959 08:25:35 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:01.959 08:25:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2372642' 00:25:01.959 killing process with pid 2372642 00:25:01.959 08:25:35 -- common/autotest_common.sh@943 -- # kill 2372642 00:25:01.959 08:25:35 -- common/autotest_common.sh@948 -- # wait 2372642 00:25:01.959 08:25:35 -- nvmf/common.sh@480 -- # '[' '' == iso 
']' 00:25:01.959 08:25:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:01.959 08:25:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:01.959 08:25:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.959 08:25:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:01.959 08:25:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.959 08:25:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.959 08:25:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.498 08:25:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:04.498 00:25:04.498 real 0m9.421s 00:25:04.498 user 0m3.353s 00:25:04.498 sys 0m4.378s 00:25:04.498 08:25:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:04.498 08:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:04.498 ************************************ 00:25:04.498 END TEST nvmf_async_init 00:25:04.498 ************************************ 00:25:04.498 08:25:37 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:04.498 08:25:37 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:25:04.498 08:25:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:04.498 08:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:04.498 ************************************ 00:25:04.498 START TEST dma 00:25:04.498 ************************************ 00:25:04.498 08:25:37 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:04.498 * Looking for test storage... 
00:25:04.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:04.498 08:25:37 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.498 08:25:37 -- nvmf/common.sh@7 -- # uname -s 00:25:04.498 08:25:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.498 08:25:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.498 08:25:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.498 08:25:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.498 08:25:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.498 08:25:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.498 08:25:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.498 08:25:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.498 08:25:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.498 08:25:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.498 08:25:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:04.498 08:25:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:04.498 08:25:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.498 08:25:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.498 08:25:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.498 08:25:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.498 08:25:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.498 08:25:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.498 08:25:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.498 08:25:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.498 08:25:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.498 08:25:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.498 08:25:37 -- paths/export.sh@5 -- # export PATH 00:25:04.498 08:25:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.498 08:25:37 -- nvmf/common.sh@46 -- # : 0 00:25:04.498 08:25:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:04.498 08:25:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:04.498 08:25:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:04.498 08:25:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.498 08:25:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.498 08:25:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:04.498 08:25:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:04.498 08:25:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:04.498 08:25:37 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:04.498 08:25:37 -- host/dma.sh@13 -- # exit 0 00:25:04.498 00:25:04.498 real 0m0.101s 00:25:04.498 user 0m0.048s 00:25:04.498 sys 0m0.061s 00:25:04.498 08:25:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:04.498 08:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:04.498 ************************************ 00:25:04.498 END TEST dma 00:25:04.498 ************************************ 00:25:04.499 08:25:37 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:04.499 08:25:37 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:25:04.499 08:25:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:04.499 08:25:37 -- common/autotest_common.sh@10 
-- # set +x 00:25:04.499 ************************************ 00:25:04.499 START TEST nvmf_identify 00:25:04.499 ************************************ 00:25:04.499 08:25:37 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:04.499 * Looking for test storage... 00:25:04.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:04.499 08:25:37 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.499 08:25:37 -- nvmf/common.sh@7 -- # uname -s 00:25:04.499 08:25:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.499 08:25:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.499 08:25:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.499 08:25:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.499 08:25:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.499 08:25:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.499 08:25:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.499 08:25:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.499 08:25:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.499 08:25:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.499 08:25:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:04.499 08:25:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:04.499 08:25:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.499 08:25:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.499 08:25:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.499 08:25:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.499 08:25:37 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:25:04.499 08:25:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.499 08:25:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.499 08:25:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.499 08:25:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.499 08:25:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.499 08:25:37 -- paths/export.sh@5 -- # export PATH 00:25:04.499 
08:25:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.499 08:25:37 -- nvmf/common.sh@46 -- # : 0 00:25:04.499 08:25:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:04.499 08:25:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:04.499 08:25:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:04.499 08:25:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.499 08:25:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.499 08:25:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:04.499 08:25:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:04.499 08:25:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:04.499 08:25:38 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:04.499 08:25:38 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:04.499 08:25:38 -- host/identify.sh@14 -- # nvmftestinit 00:25:04.499 08:25:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:04.499 08:25:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.499 08:25:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:04.499 08:25:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:04.499 08:25:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:04.499 08:25:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.499 08:25:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.499 08:25:38 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:04.499 08:25:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:04.499 08:25:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:04.499 08:25:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:04.499 08:25:38 -- common/autotest_common.sh@10 -- # set +x 00:25:11.077 08:25:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:11.077 08:25:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:11.077 08:25:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:11.077 08:25:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:11.077 08:25:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:11.077 08:25:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:11.077 08:25:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:11.077 08:25:43 -- nvmf/common.sh@294 -- # net_devs=() 00:25:11.077 08:25:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:11.077 08:25:43 -- nvmf/common.sh@295 -- # e810=() 00:25:11.077 08:25:43 -- nvmf/common.sh@295 -- # local -ga e810 00:25:11.077 08:25:43 -- nvmf/common.sh@296 -- # x722=() 00:25:11.077 08:25:43 -- nvmf/common.sh@296 -- # local -ga x722 00:25:11.077 08:25:43 -- nvmf/common.sh@297 -- # mlx=() 00:25:11.077 08:25:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:11.077 08:25:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.077 08:25:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:11.077 08:25:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:11.077 08:25:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:11.077 08:25:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.077 08:25:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:11.077 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:11.077 08:25:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.077 08:25:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:11.077 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:11.077 08:25:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:11.077 08:25:43 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:11.077 08:25:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.077 08:25:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.077 08:25:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.077 08:25:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.077 08:25:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:11.077 Found net devices under 0000:af:00.0: cvl_0_0 00:25:11.077 08:25:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.077 08:25:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.077 08:25:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.078 08:25:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.078 08:25:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.078 08:25:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:11.078 Found net devices under 0000:af:00.1: cvl_0_1 00:25:11.078 08:25:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.078 08:25:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:11.078 08:25:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:11.078 08:25:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:11.078 08:25:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:11.078 08:25:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:11.078 08:25:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.078 08:25:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.078 08:25:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.078 08:25:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:11.078 08:25:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.078 08:25:43 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.078 08:25:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:11.078 08:25:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.078 08:25:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.078 08:25:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:11.078 08:25:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:11.078 08:25:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.078 08:25:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.078 08:25:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.078 08:25:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.078 08:25:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:11.078 08:25:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.078 08:25:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.078 08:25:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.078 08:25:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:11.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:25:11.078 00:25:11.078 --- 10.0.0.2 ping statistics --- 00:25:11.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.078 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:11.078 08:25:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:25:11.078 00:25:11.078 --- 10.0.0.1 ping statistics --- 00:25:11.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.078 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:25:11.078 08:25:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.078 08:25:44 -- nvmf/common.sh@410 -- # return 0 00:25:11.078 08:25:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:11.078 08:25:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.078 08:25:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:11.078 08:25:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:11.078 08:25:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.078 08:25:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:11.078 08:25:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:11.078 08:25:44 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:11.078 08:25:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:11.078 08:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.078 08:25:44 -- host/identify.sh@19 -- # nvmfpid=2376799 00:25:11.078 08:25:44 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:11.078 08:25:44 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:11.078 08:25:44 -- host/identify.sh@23 -- # waitforlisten 2376799 00:25:11.078 08:25:44 -- common/autotest_common.sh@817 -- # '[' -z 2376799 ']' 00:25:11.078 08:25:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.078 08:25:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:11.078 08:25:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:11.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.078 08:25:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:11.078 08:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.078 [2024-02-13 08:25:44.135711] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:11.078 [2024-02-13 08:25:44.135754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.078 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.078 [2024-02-13 08:25:44.199675] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.078 [2024-02-13 08:25:44.270851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.078 [2024-02-13 08:25:44.270965] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.078 [2024-02-13 08:25:44.270973] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.078 [2024-02-13 08:25:44.270979] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:11.078 [2024-02-13 08:25:44.271027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.078 [2024-02-13 08:25:44.271137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.078 [2024-02-13 08:25:44.271158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.078 [2024-02-13 08:25:44.271160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.338 08:25:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:11.338 08:25:44 -- common/autotest_common.sh@850 -- # return 0 00:25:11.338 08:25:44 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.338 08:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.338 08:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.338 [2024-02-13 08:25:44.936708] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.338 08:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.338 08:25:44 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:11.338 08:25:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:11.338 08:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.338 08:25:44 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:11.338 08:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.338 08:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.338 Malloc0 00:25:11.338 08:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.338 08:25:44 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:11.338 08:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.338 08:25:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.338 08:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.338 08:25:45 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:11.338 08:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.338 08:25:45 -- common/autotest_common.sh@10 -- # set +x 00:25:11.338 08:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.338 08:25:45 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.338 08:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.338 08:25:45 -- common/autotest_common.sh@10 -- # set +x 00:25:11.338 [2024-02-13 08:25:45.020197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.338 08:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.600 08:25:45 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:11.600 08:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.600 08:25:45 -- common/autotest_common.sh@10 -- # set +x 00:25:11.600 08:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.600 08:25:45 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:11.600 08:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.600 08:25:45 -- common/autotest_common.sh@10 -- # set +x 00:25:11.600 [2024-02-13 08:25:45.036015] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:11.600 [ 00:25:11.600 { 00:25:11.600 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:11.600 "subtype": "Discovery", 00:25:11.600 "listen_addresses": [ 00:25:11.600 { 00:25:11.600 "transport": "TCP", 00:25:11.600 "trtype": "TCP", 00:25:11.600 "adrfam": "IPv4", 00:25:11.600 "traddr": "10.0.0.2", 00:25:11.600 "trsvcid": "4420" 00:25:11.600 } 00:25:11.600 ], 00:25:11.600 "allow_any_host": true, 00:25:11.600 "hosts": [] 00:25:11.600 }, 00:25:11.600 
{ 00:25:11.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.600 "subtype": "NVMe", 00:25:11.600 "listen_addresses": [ 00:25:11.600 { 00:25:11.600 "transport": "TCP", 00:25:11.600 "trtype": "TCP", 00:25:11.600 "adrfam": "IPv4", 00:25:11.600 "traddr": "10.0.0.2", 00:25:11.600 "trsvcid": "4420" 00:25:11.600 } 00:25:11.600 ], 00:25:11.600 "allow_any_host": true, 00:25:11.600 "hosts": [], 00:25:11.600 "serial_number": "SPDK00000000000001", 00:25:11.600 "model_number": "SPDK bdev Controller", 00:25:11.600 "max_namespaces": 32, 00:25:11.600 "min_cntlid": 1, 00:25:11.600 "max_cntlid": 65519, 00:25:11.600 "namespaces": [ 00:25:11.600 { 00:25:11.600 "nsid": 1, 00:25:11.600 "bdev_name": "Malloc0", 00:25:11.600 "name": "Malloc0", 00:25:11.600 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:11.600 "eui64": "ABCDEF0123456789", 00:25:11.600 "uuid": "f0157a1b-2050-4530-b842-285c013cf2b5" 00:25:11.600 } 00:25:11.600 ] 00:25:11.600 } 00:25:11.600 ] 00:25:11.600 08:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.600 08:25:45 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:11.600 [2024-02-13 08:25:45.069482] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:25:11.600 [2024-02-13 08:25:45.069514] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376912 ] 00:25:11.600 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.600 [2024-02-13 08:25:45.099044] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:11.600 [2024-02-13 08:25:45.099091] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:11.600 [2024-02-13 08:25:45.099096] nvme_tcp.c:2246:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:11.600 [2024-02-13 08:25:45.099109] nvme_tcp.c:2264:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:11.600 [2024-02-13 08:25:45.099116] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:11.600 [2024-02-13 08:25:45.099539] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:11.600 [2024-02-13 08:25:45.099569] nvme_tcp.c:1485:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd70b80 0 00:25:11.600 [2024-02-13 08:25:45.113658] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:11.600 [2024-02-13 08:25:45.113670] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:11.600 [2024-02-13 08:25:45.113674] nvme_tcp.c:1531:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:11.600 [2024-02-13 08:25:45.113677] nvme_tcp.c:1532:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:11.600 [2024-02-13 08:25:45.113710] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.600 [2024-02-13 08:25:45.113716] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:11.600 [2024-02-13 08:25:45.113720] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.600 [2024-02-13 08:25:45.113731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:11.600 [2024-02-13 08:25:45.113746] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.600 [2024-02-13 08:25:45.121658] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.600 [2024-02-13 08:25:45.121667] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.600 [2024-02-13 08:25:45.121670] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.600 [2024-02-13 08:25:45.121691] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.600 [2024-02-13 08:25:45.121701] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:11.600 [2024-02-13 08:25:45.121706] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:11.600 [2024-02-13 08:25:45.121713] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:11.600 [2024-02-13 08:25:45.121726] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.600 [2024-02-13 08:25:45.121730] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.600 [2024-02-13 08:25:45.121733] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.600 [2024-02-13 08:25:45.121741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.601 [2024-02-13 08:25:45.121754] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xdd88d0, cid 0, qid 0 00:25:11.601 [2024-02-13 08:25:45.121976] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.601 [2024-02-13 08:25:45.121986] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.601 [2024-02-13 08:25:45.121989] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.121993] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.601 [2024-02-13 08:25:45.122002] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:11.601 [2024-02-13 08:25:45.122010] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:11.601 [2024-02-13 08:25:45.122018] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122022] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122025] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.601 [2024-02-13 08:25:45.122032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.601 [2024-02-13 08:25:45.122045] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.601 [2024-02-13 08:25:45.122151] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.601 [2024-02-13 08:25:45.122158] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.601 [2024-02-13 08:25:45.122161] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122165] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.601 [2024-02-13 08:25:45.122170] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:11.601 [2024-02-13 08:25:45.122177] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:11.601 [2024-02-13 08:25:45.122184] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122187] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122190] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.601 [2024-02-13 08:25:45.122197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.601 [2024-02-13 08:25:45.122208] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.601 [2024-02-13 08:25:45.122326] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.601 [2024-02-13 08:25:45.122332] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.601 [2024-02-13 08:25:45.122335] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122338] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.601 [2024-02-13 08:25:45.122343] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:11.601 [2024-02-13 08:25:45.122353] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122359] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122362] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.601 
[2024-02-13 08:25:45.122369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.601 [2024-02-13 08:25:45.122379] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.601 [2024-02-13 08:25:45.122528] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.601 [2024-02-13 08:25:45.122534] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.601 [2024-02-13 08:25:45.122537] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122540] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.601 [2024-02-13 08:25:45.122545] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:11.601 [2024-02-13 08:25:45.122549] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:11.601 [2024-02-13 08:25:45.122556] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:11.601 [2024-02-13 08:25:45.122661] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:11.601 [2024-02-13 08:25:45.122665] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:11.601 [2024-02-13 08:25:45.122674] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122677] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122680] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.601 [2024-02-13 08:25:45.122686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.601 [2024-02-13 08:25:45.122698] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.601 [2024-02-13 08:25:45.122807] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.601 [2024-02-13 08:25:45.122814] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.601 [2024-02-13 08:25:45.122817] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122820] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.601 [2024-02-13 08:25:45.122824] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:11.601 [2024-02-13 08:25:45.122833] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122837] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.122840] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.601 [2024-02-13 08:25:45.122845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.601 [2024-02-13 08:25:45.122856] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.601 [2024-02-13 08:25:45.122963] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.601 [2024-02-13 08:25:45.122969] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.601 [2024-02-13 08:25:45.122972] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 
08:25:45.122976] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.601 [2024-02-13 08:25:45.122980] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:11.601 [2024-02-13 08:25:45.122987] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:11.601 [2024-02-13 08:25:45.122994] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:11.601 [2024-02-13 08:25:45.123002] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:11.601 [2024-02-13 08:25:45.123011] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.123014] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.123017] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.601 [2024-02-13 08:25:45.123024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.601 [2024-02-13 08:25:45.123035] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.601 [2024-02-13 08:25:45.123166] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.601 [2024-02-13 08:25:45.123174] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.601 [2024-02-13 08:25:45.123177] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.123181] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xd70b80): datao=0, datal=4096, cccid=0 00:25:11.601 [2024-02-13 08:25:45.123185] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd88d0) on tqpair(0xd70b80): expected_datao=0, payload_size=4096 00:25:11.601 [2024-02-13 08:25:45.123192] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.123196] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.123249] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.601 [2024-02-13 08:25:45.123255] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.601 [2024-02-13 08:25:45.123257] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.123261] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.601 [2024-02-13 08:25:45.123268] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:11.601 [2024-02-13 08:25:45.123276] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:11.601 [2024-02-13 08:25:45.123280] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:11.601 [2024-02-13 08:25:45.123284] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:11.601 [2024-02-13 08:25:45.123288] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:11.601 [2024-02-13 08:25:45.123293] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:11.601 [2024-02-13 08:25:45.123301] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to wait for configure aer (timeout 30000 ms) 00:25:11.601 [2024-02-13 08:25:45.123307] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.601 [2024-02-13 08:25:45.123311] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123314] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.123320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:11.602 [2024-02-13 08:25:45.123332] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.602 [2024-02-13 08:25:45.123477] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.602 [2024-02-13 08:25:45.123483] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.602 [2024-02-13 08:25:45.123486] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123489] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd88d0) on tqpair=0xd70b80 00:25:11.602 [2024-02-13 08:25:45.123496] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123499] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123502] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.123508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.602 [2024-02-13 08:25:45.123513] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123516] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123519] nvme_tcp.c: 
900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.123523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.602 [2024-02-13 08:25:45.123528] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123531] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123534] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.123538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.602 [2024-02-13 08:25:45.123543] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123546] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123549] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.123553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.602 [2024-02-13 08:25:45.123558] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:11.602 [2024-02-13 08:25:45.123568] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:11.602 [2024-02-13 08:25:45.123574] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123577] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123580] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.123585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.602 [2024-02-13 08:25:45.123597] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd88d0, cid 0, qid 0 00:25:11.602 [2024-02-13 08:25:45.123602] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8a30, cid 1, qid 0 00:25:11.602 [2024-02-13 08:25:45.123606] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8b90, cid 2, qid 0 00:25:11.602 [2024-02-13 08:25:45.123610] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8cf0, cid 3, qid 0 00:25:11.602 [2024-02-13 08:25:45.123613] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8e50, cid 4, qid 0 00:25:11.602 [2024-02-13 08:25:45.123773] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.602 [2024-02-13 08:25:45.123780] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.602 [2024-02-13 08:25:45.123783] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123786] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8e50) on tqpair=0xd70b80 00:25:11.602 [2024-02-13 08:25:45.123794] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:11.602 [2024-02-13 08:25:45.123799] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:11.602 [2024-02-13 08:25:45.123809] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123813] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123816] nvme_tcp.c: 
900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.123821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.602 [2024-02-13 08:25:45.123832] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8e50, cid 4, qid 0 00:25:11.602 [2024-02-13 08:25:45.123948] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.602 [2024-02-13 08:25:45.123955] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.602 [2024-02-13 08:25:45.123958] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.123961] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd70b80): datao=0, datal=4096, cccid=4 00:25:11.602 [2024-02-13 08:25:45.123965] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd8e50) on tqpair(0xd70b80): expected_datao=0, payload_size=4096 00:25:11.602 [2024-02-13 08:25:45.124020] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.124024] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168657] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.602 [2024-02-13 08:25:45.168670] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.602 [2024-02-13 08:25:45.168674] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168677] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8e50) on tqpair=0xd70b80 00:25:11.602 [2024-02-13 08:25:45.168691] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:11.602 [2024-02-13 08:25:45.168711] nvme_tcp.c: 737:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168715] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168718] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.168724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.602 [2024-02-13 08:25:45.168731] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168734] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168737] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.168742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.602 [2024-02-13 08:25:45.168760] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8e50, cid 4, qid 0 00:25:11.602 [2024-02-13 08:25:45.168765] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8fb0, cid 5, qid 0 00:25:11.602 [2024-02-13 08:25:45.168909] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.602 [2024-02-13 08:25:45.168917] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.602 [2024-02-13 08:25:45.168920] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168923] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd70b80): datao=0, datal=1024, cccid=4 00:25:11.602 [2024-02-13 08:25:45.168927] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd8e50) on tqpair(0xd70b80): expected_datao=0, payload_size=1024 00:25:11.602 [2024-02-13 08:25:45.168939] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:25:11.602 [2024-02-13 08:25:45.168942] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168947] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.602 [2024-02-13 08:25:45.168951] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.602 [2024-02-13 08:25:45.168955] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.168958] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8fb0) on tqpair=0xd70b80 00:25:11.602 [2024-02-13 08:25:45.209761] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.602 [2024-02-13 08:25:45.209776] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.602 [2024-02-13 08:25:45.209779] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.209783] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8e50) on tqpair=0xd70b80 00:25:11.602 [2024-02-13 08:25:45.209795] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.209798] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.602 [2024-02-13 08:25:45.209801] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd70b80) 00:25:11.602 [2024-02-13 08:25:45.209808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.602 [2024-02-13 08:25:45.209825] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8e50, cid 4, qid 0 00:25:11.603 [2024-02-13 08:25:45.209989] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.603 [2024-02-13 08:25:45.209997] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.603 [2024-02-13 08:25:45.210000] 
nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210004] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd70b80): datao=0, datal=3072, cccid=4 00:25:11.603 [2024-02-13 08:25:45.210008] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd8e50) on tqpair(0xd70b80): expected_datao=0, payload_size=3072 00:25:11.603 [2024-02-13 08:25:45.210014] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210017] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210191] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.603 [2024-02-13 08:25:45.210196] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.603 [2024-02-13 08:25:45.210199] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210201] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8e50) on tqpair=0xd70b80 00:25:11.603 [2024-02-13 08:25:45.210210] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210214] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210216] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd70b80) 00:25:11.603 [2024-02-13 08:25:45.210222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.603 [2024-02-13 08:25:45.210237] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8e50, cid 4, qid 0 00:25:11.603 [2024-02-13 08:25:45.210352] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.603 [2024-02-13 08:25:45.210358] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.603 [2024-02-13 
08:25:45.210361] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210364] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd70b80): datao=0, datal=8, cccid=4 00:25:11.603 [2024-02-13 08:25:45.210368] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdd8e50) on tqpair(0xd70b80): expected_datao=0, payload_size=8 00:25:11.603 [2024-02-13 08:25:45.210374] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.210380] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.250762] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.603 [2024-02-13 08:25:45.250775] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.603 [2024-02-13 08:25:45.250778] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.603 [2024-02-13 08:25:45.250781] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8e50) on tqpair=0xd70b80 00:25:11.603 ===================================================== 00:25:11.603 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:11.603 ===================================================== 00:25:11.603 Controller Capabilities/Features 00:25:11.603 ================================ 00:25:11.603 Vendor ID: 0000 00:25:11.603 Subsystem Vendor ID: 0000 00:25:11.603 Serial Number: .................... 00:25:11.603 Model Number: ........................................ 
00:25:11.603 Firmware Version: 24.05 00:25:11.603 Recommended Arb Burst: 0 00:25:11.603 IEEE OUI Identifier: 00 00 00 00:25:11.603 Multi-path I/O 00:25:11.603 May have multiple subsystem ports: No 00:25:11.603 May have multiple controllers: No 00:25:11.603 Associated with SR-IOV VF: No 00:25:11.603 Max Data Transfer Size: 131072 00:25:11.603 Max Number of Namespaces: 0 00:25:11.603 Max Number of I/O Queues: 1024 00:25:11.603 NVMe Specification Version (VS): 1.3 00:25:11.603 NVMe Specification Version (Identify): 1.3 00:25:11.603 Maximum Queue Entries: 128 00:25:11.603 Contiguous Queues Required: Yes 00:25:11.603 Arbitration Mechanisms Supported 00:25:11.603 Weighted Round Robin: Not Supported 00:25:11.603 Vendor Specific: Not Supported 00:25:11.603 Reset Timeout: 15000 ms 00:25:11.603 Doorbell Stride: 4 bytes 00:25:11.603 NVM Subsystem Reset: Not Supported 00:25:11.603 Command Sets Supported 00:25:11.603 NVM Command Set: Supported 00:25:11.603 Boot Partition: Not Supported 00:25:11.603 Memory Page Size Minimum: 4096 bytes 00:25:11.603 Memory Page Size Maximum: 4096 bytes 00:25:11.603 Persistent Memory Region: Not Supported 00:25:11.603 Optional Asynchronous Events Supported 00:25:11.603 Namespace Attribute Notices: Not Supported 00:25:11.603 Firmware Activation Notices: Not Supported 00:25:11.603 ANA Change Notices: Not Supported 00:25:11.603 PLE Aggregate Log Change Notices: Not Supported 00:25:11.603 LBA Status Info Alert Notices: Not Supported 00:25:11.603 EGE Aggregate Log Change Notices: Not Supported 00:25:11.603 Normal NVM Subsystem Shutdown event: Not Supported 00:25:11.603 Zone Descriptor Change Notices: Not Supported 00:25:11.603 Discovery Log Change Notices: Supported 00:25:11.603 Controller Attributes 00:25:11.603 128-bit Host Identifier: Not Supported 00:25:11.603 Non-Operational Permissive Mode: Not Supported 00:25:11.603 NVM Sets: Not Supported 00:25:11.603 Read Recovery Levels: Not Supported 00:25:11.603 Endurance Groups: Not Supported 00:25:11.603 
Predictable Latency Mode: Not Supported 00:25:11.603 Traffic Based Keep ALive: Not Supported 00:25:11.603 Namespace Granularity: Not Supported 00:25:11.603 SQ Associations: Not Supported 00:25:11.603 UUID List: Not Supported 00:25:11.603 Multi-Domain Subsystem: Not Supported 00:25:11.603 Fixed Capacity Management: Not Supported 00:25:11.603 Variable Capacity Management: Not Supported 00:25:11.603 Delete Endurance Group: Not Supported 00:25:11.603 Delete NVM Set: Not Supported 00:25:11.603 Extended LBA Formats Supported: Not Supported 00:25:11.603 Flexible Data Placement Supported: Not Supported 00:25:11.603 00:25:11.603 Controller Memory Buffer Support 00:25:11.603 ================================ 00:25:11.603 Supported: No 00:25:11.603 00:25:11.603 Persistent Memory Region Support 00:25:11.603 ================================ 00:25:11.603 Supported: No 00:25:11.603 00:25:11.603 Admin Command Set Attributes 00:25:11.603 ============================ 00:25:11.603 Security Send/Receive: Not Supported 00:25:11.603 Format NVM: Not Supported 00:25:11.603 Firmware Activate/Download: Not Supported 00:25:11.603 Namespace Management: Not Supported 00:25:11.603 Device Self-Test: Not Supported 00:25:11.603 Directives: Not Supported 00:25:11.603 NVMe-MI: Not Supported 00:25:11.603 Virtualization Management: Not Supported 00:25:11.603 Doorbell Buffer Config: Not Supported 00:25:11.603 Get LBA Status Capability: Not Supported 00:25:11.603 Command & Feature Lockdown Capability: Not Supported 00:25:11.603 Abort Command Limit: 1 00:25:11.603 Async Event Request Limit: 4 00:25:11.603 Number of Firmware Slots: N/A 00:25:11.603 Firmware Slot 1 Read-Only: N/A 00:25:11.603 Firmware Activation Without Reset: N/A 00:25:11.603 Multiple Update Detection Support: N/A 00:25:11.603 Firmware Update Granularity: No Information Provided 00:25:11.603 Per-Namespace SMART Log: No 00:25:11.603 Asymmetric Namespace Access Log Page: Not Supported 00:25:11.603 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:11.603 Command Effects Log Page: Not Supported 00:25:11.603 Get Log Page Extended Data: Supported 00:25:11.603 Telemetry Log Pages: Not Supported 00:25:11.603 Persistent Event Log Pages: Not Supported 00:25:11.603 Supported Log Pages Log Page: May Support 00:25:11.603 Commands Supported & Effects Log Page: Not Supported 00:25:11.603 Feature Identifiers & Effects Log Page:May Support 00:25:11.603 NVMe-MI Commands & Effects Log Page: May Support 00:25:11.603 Data Area 4 for Telemetry Log: Not Supported 00:25:11.603 Error Log Page Entries Supported: 128 00:25:11.603 Keep Alive: Not Supported 00:25:11.603 00:25:11.603 NVM Command Set Attributes 00:25:11.603 ========================== 00:25:11.604 Submission Queue Entry Size 00:25:11.604 Max: 1 00:25:11.604 Min: 1 00:25:11.604 Completion Queue Entry Size 00:25:11.604 Max: 1 00:25:11.604 Min: 1 00:25:11.604 Number of Namespaces: 0 00:25:11.604 Compare Command: Not Supported 00:25:11.604 Write Uncorrectable Command: Not Supported 00:25:11.604 Dataset Management Command: Not Supported 00:25:11.604 Write Zeroes Command: Not Supported 00:25:11.604 Set Features Save Field: Not Supported 00:25:11.604 Reservations: Not Supported 00:25:11.604 Timestamp: Not Supported 00:25:11.604 Copy: Not Supported 00:25:11.604 Volatile Write Cache: Not Present 00:25:11.604 Atomic Write Unit (Normal): 1 00:25:11.604 Atomic Write Unit (PFail): 1 00:25:11.604 Atomic Compare & Write Unit: 1 00:25:11.604 Fused Compare & Write: Supported 00:25:11.604 Scatter-Gather List 00:25:11.604 SGL Command Set: Supported 00:25:11.604 SGL Keyed: Supported 00:25:11.604 SGL Bit Bucket Descriptor: Not Supported 00:25:11.604 SGL Metadata Pointer: Not Supported 00:25:11.604 Oversized SGL: Not Supported 00:25:11.604 SGL Metadata Address: Not Supported 00:25:11.604 SGL Offset: Supported 00:25:11.604 Transport SGL Data Block: Not Supported 00:25:11.604 Replay Protected Memory Block: Not Supported 00:25:11.604 00:25:11.604 
Firmware Slot Information 00:25:11.604 ========================= 00:25:11.604 Active slot: 0 00:25:11.604 00:25:11.604 00:25:11.604 Error Log 00:25:11.604 ========= 00:25:11.604 00:25:11.604 Active Namespaces 00:25:11.604 ================= 00:25:11.604 Discovery Log Page 00:25:11.604 ================== 00:25:11.604 Generation Counter: 2 00:25:11.604 Number of Records: 2 00:25:11.604 Record Format: 0 00:25:11.604 00:25:11.604 Discovery Log Entry 0 00:25:11.604 ---------------------- 00:25:11.604 Transport Type: 3 (TCP) 00:25:11.604 Address Family: 1 (IPv4) 00:25:11.604 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:11.604 Entry Flags: 00:25:11.604 Duplicate Returned Information: 1 00:25:11.604 Explicit Persistent Connection Support for Discovery: 1 00:25:11.604 Transport Requirements: 00:25:11.604 Secure Channel: Not Required 00:25:11.604 Port ID: 0 (0x0000) 00:25:11.604 Controller ID: 65535 (0xffff) 00:25:11.604 Admin Max SQ Size: 128 00:25:11.604 Transport Service Identifier: 4420 00:25:11.604 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:11.604 Transport Address: 10.0.0.2 00:25:11.604 Discovery Log Entry 1 00:25:11.604 ---------------------- 00:25:11.604 Transport Type: 3 (TCP) 00:25:11.604 Address Family: 1 (IPv4) 00:25:11.604 Subsystem Type: 2 (NVM Subsystem) 00:25:11.604 Entry Flags: 00:25:11.604 Duplicate Returned Information: 0 00:25:11.604 Explicit Persistent Connection Support for Discovery: 0 00:25:11.604 Transport Requirements: 00:25:11.604 Secure Channel: Not Required 00:25:11.604 Port ID: 0 (0x0000) 00:25:11.604 Controller ID: 65535 (0xffff) 00:25:11.604 Admin Max SQ Size: 128 00:25:11.604 Transport Service Identifier: 4420 00:25:11.604 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:11.604 Transport Address: 10.0.0.2 [2024-02-13 08:25:45.250862] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:11.604 [2024-02-13 08:25:45.250874] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.604 [2024-02-13 08:25:45.250880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.604 [2024-02-13 08:25:45.250885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.604 [2024-02-13 08:25:45.250890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.604 [2024-02-13 08:25:45.250900] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.250903] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.250906] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd70b80) 00:25:11.604 [2024-02-13 08:25:45.250913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.604 [2024-02-13 08:25:45.250926] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8cf0, cid 3, qid 0 00:25:11.604 [2024-02-13 08:25:45.251029] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.604 [2024-02-13 08:25:45.251036] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.604 [2024-02-13 08:25:45.251039] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251043] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8cf0) on tqpair=0xd70b80 00:25:11.604 [2024-02-13 08:25:45.251050] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251053] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.604 [2024-02-13 
08:25:45.251056] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd70b80) 00:25:11.604 [2024-02-13 08:25:45.251062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.604 [2024-02-13 08:25:45.251076] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8cf0, cid 3, qid 0 00:25:11.604 [2024-02-13 08:25:45.251194] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.604 [2024-02-13 08:25:45.251200] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.604 [2024-02-13 08:25:45.251203] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251206] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8cf0) on tqpair=0xd70b80 00:25:11.604 [2024-02-13 08:25:45.251211] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:11.604 [2024-02-13 08:25:45.251215] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:11.604 [2024-02-13 08:25:45.251224] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251227] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251230] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd70b80) 00:25:11.604 [2024-02-13 08:25:45.251236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.604 [2024-02-13 08:25:45.251249] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8cf0, cid 3, qid 0 00:25:11.604 [2024-02-13 08:25:45.251362] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.604 [2024-02-13 
08:25:45.251368] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.604 [2024-02-13 08:25:45.251371] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251373] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8cf0) on tqpair=0xd70b80 00:25:11.604 [2024-02-13 08:25:45.251383] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251387] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251390] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd70b80) 00:25:11.604 [2024-02-13 08:25:45.251396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.604 [2024-02-13 08:25:45.251406] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8cf0, cid 3, qid 0 00:25:11.604 [2024-02-13 08:25:45.251513] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.604 [2024-02-13 08:25:45.251519] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.604 [2024-02-13 08:25:45.251522] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251525] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8cf0) on tqpair=0xd70b80 00:25:11.604 [2024-02-13 08:25:45.251534] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251537] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.251540] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd70b80) 00:25:11.604 [2024-02-13 08:25:45.251546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.604 [2024-02-13 
08:25:45.251557] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8cf0, cid 3, qid 0 00:25:11.604 [2024-02-13 08:25:45.255656] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.604 [2024-02-13 08:25:45.255668] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.604 [2024-02-13 08:25:45.255671] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.604 [2024-02-13 08:25:45.255674] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8cf0) on tqpair=0xd70b80 00:25:11.604 [2024-02-13 08:25:45.255685] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.605 [2024-02-13 08:25:45.255688] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.605 [2024-02-13 08:25:45.255691] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd70b80) 00:25:11.605 [2024-02-13 08:25:45.255697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.605 [2024-02-13 08:25:45.255709] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdd8cf0, cid 3, qid 0 00:25:11.605 [2024-02-13 08:25:45.255897] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.605 [2024-02-13 08:25:45.255904] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.605 [2024-02-13 08:25:45.255906] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.605 [2024-02-13 08:25:45.255910] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xdd8cf0) on tqpair=0xd70b80 00:25:11.605 [2024-02-13 08:25:45.255917] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:25:11.605 00:25:11.605 08:25:45 -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:11.868 [2024-02-13 08:25:45.291332] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:11.868 [2024-02-13 08:25:45.291380] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377037 ] 00:25:11.868 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.868 [2024-02-13 08:25:45.319661] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:11.868 [2024-02-13 08:25:45.319704] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:11.868 [2024-02-13 08:25:45.319709] nvme_tcp.c:2246:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:11.868 [2024-02-13 08:25:45.319720] nvme_tcp.c:2264:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:11.868 [2024-02-13 08:25:45.319726] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:11.868 [2024-02-13 08:25:45.320196] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:11.868 [2024-02-13 08:25:45.320216] nvme_tcp.c:1485:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2089b80 0 00:25:11.868 [2024-02-13 08:25:45.326660] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:11.868 [2024-02-13 08:25:45.326674] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:11.868 [2024-02-13 08:25:45.326677] nvme_tcp.c:1531:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:11.868 [2024-02-13 08:25:45.326680] 
nvme_tcp.c:1532:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:11.868 [2024-02-13 08:25:45.326709] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.326714] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.326717] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.868 [2024-02-13 08:25:45.326726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:11.868 [2024-02-13 08:25:45.326741] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.868 [2024-02-13 08:25:45.334655] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.868 [2024-02-13 08:25:45.334663] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.868 [2024-02-13 08:25:45.334666] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.334669] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.868 [2024-02-13 08:25:45.334680] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:11.868 [2024-02-13 08:25:45.334685] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:11.868 [2024-02-13 08:25:45.334689] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:11.868 [2024-02-13 08:25:45.334700] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.334703] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.334707] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.868 
[2024-02-13 08:25:45.334713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.868 [2024-02-13 08:25:45.334724] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.868 [2024-02-13 08:25:45.334925] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.868 [2024-02-13 08:25:45.334933] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.868 [2024-02-13 08:25:45.334936] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.334939] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.868 [2024-02-13 08:25:45.334950] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:11.868 [2024-02-13 08:25:45.334958] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:11.868 [2024-02-13 08:25:45.334966] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.334969] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.868 [2024-02-13 08:25:45.334972] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.868 [2024-02-13 08:25:45.334979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.868 [2024-02-13 08:25:45.334990] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.869 [2024-02-13 08:25:45.335096] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.869 [2024-02-13 08:25:45.335103] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.869 [2024-02-13 
08:25:45.335106] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335109] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.869 [2024-02-13 08:25:45.335114] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:11.869 [2024-02-13 08:25:45.335122] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:11.869 [2024-02-13 08:25:45.335128] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335131] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335134] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.335140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.869 [2024-02-13 08:25:45.335151] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.869 [2024-02-13 08:25:45.335256] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.869 [2024-02-13 08:25:45.335262] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.869 [2024-02-13 08:25:45.335265] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335268] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.869 [2024-02-13 08:25:45.335274] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:11.869 [2024-02-13 08:25:45.335283] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 
[2024-02-13 08:25:45.335286] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335289] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.335295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.869 [2024-02-13 08:25:45.335306] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.869 [2024-02-13 08:25:45.335417] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.869 [2024-02-13 08:25:45.335423] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.869 [2024-02-13 08:25:45.335426] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335429] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.869 [2024-02-13 08:25:45.335433] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:11.869 [2024-02-13 08:25:45.335437] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:11.869 [2024-02-13 08:25:45.335447] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:11.869 [2024-02-13 08:25:45.335552] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:11.869 [2024-02-13 08:25:45.335555] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:11.869 [2024-02-13 08:25:45.335562] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:25:11.869 [2024-02-13 08:25:45.335565] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335568] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.335574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.869 [2024-02-13 08:25:45.335585] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.869 [2024-02-13 08:25:45.335698] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.869 [2024-02-13 08:25:45.335705] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.869 [2024-02-13 08:25:45.335708] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335711] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.869 [2024-02-13 08:25:45.335716] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:11.869 [2024-02-13 08:25:45.335725] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335729] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335732] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.335738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.869 [2024-02-13 08:25:45.335749] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.869 [2024-02-13 08:25:45.335962] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.869 [2024-02-13 08:25:45.335967] 
nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.869 [2024-02-13 08:25:45.335970] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.335973] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.869 [2024-02-13 08:25:45.335977] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:11.869 [2024-02-13 08:25:45.335981] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:11.869 [2024-02-13 08:25:45.335988] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:11.869 [2024-02-13 08:25:45.335996] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:11.869 [2024-02-13 08:25:45.336003] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336006] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336009] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.336015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.869 [2024-02-13 08:25:45.336024] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.869 [2024-02-13 08:25:45.336179] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.869 [2024-02-13 08:25:45.336188] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.869 [2024-02-13 08:25:45.336191] 
nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336194] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=4096, cccid=0 00:25:11.869 [2024-02-13 08:25:45.336198] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f18d0) on tqpair(0x2089b80): expected_datao=0, payload_size=4096 00:25:11.869 [2024-02-13 08:25:45.336205] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336208] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336361] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.869 [2024-02-13 08:25:45.336366] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.869 [2024-02-13 08:25:45.336368] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336372] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.869 [2024-02-13 08:25:45.336380] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:11.869 [2024-02-13 08:25:45.336386] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:11.869 [2024-02-13 08:25:45.336390] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:11.869 [2024-02-13 08:25:45.336394] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:11.869 [2024-02-13 08:25:45.336397] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:11.869 [2024-02-13 08:25:45.336401] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:11.869 
[2024-02-13 08:25:45.336409] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:11.869 [2024-02-13 08:25:45.336414] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336418] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336421] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.336427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:11.869 [2024-02-13 08:25:45.336437] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.869 [2024-02-13 08:25:45.336547] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.869 [2024-02-13 08:25:45.336553] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.869 [2024-02-13 08:25:45.336556] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336559] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f18d0) on tqpair=0x2089b80 00:25:11.869 [2024-02-13 08:25:45.336565] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336569] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336572] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.336577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.869 [2024-02-13 08:25:45.336582] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336585] nvme_tcp.c: 
891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336588] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2089b80) 00:25:11.869 [2024-02-13 08:25:45.336593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.869 [2024-02-13 08:25:45.336600] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336604] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.869 [2024-02-13 08:25:45.336607] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2089b80) 00:25:11.870 [2024-02-13 08:25:45.336611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.870 [2024-02-13 08:25:45.336616] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.336619] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.336622] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.870 [2024-02-13 08:25:45.336627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.870 [2024-02-13 08:25:45.336631] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.336641] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.336654] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.336657] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:11.870 [2024-02-13 08:25:45.336660] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2089b80) 00:25:11.870 [2024-02-13 08:25:45.336666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.870 [2024-02-13 08:25:45.336678] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f18d0, cid 0, qid 0 00:25:11.870 [2024-02-13 08:25:45.336683] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1a30, cid 1, qid 0 00:25:11.870 [2024-02-13 08:25:45.336687] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1b90, cid 2, qid 0 00:25:11.870 [2024-02-13 08:25:45.336691] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.870 [2024-02-13 08:25:45.336694] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1e50, cid 4, qid 0 00:25:11.870 [2024-02-13 08:25:45.336835] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.870 [2024-02-13 08:25:45.336841] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.870 [2024-02-13 08:25:45.336844] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.336847] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1e50) on tqpair=0x2089b80 00:25:11.870 [2024-02-13 08:25:45.336853] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:11.870 [2024-02-13 08:25:45.336857] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.336865] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 
ms) 00:25:11.870 [2024-02-13 08:25:45.336871] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.336876] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.336880] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.336883] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2089b80) 00:25:11.870 [2024-02-13 08:25:45.336889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:11.870 [2024-02-13 08:25:45.336900] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1e50, cid 4, qid 0 00:25:11.870 [2024-02-13 08:25:45.337008] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.870 [2024-02-13 08:25:45.337014] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.870 [2024-02-13 08:25:45.337017] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.337021] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1e50) on tqpair=0x2089b80 00:25:11.870 [2024-02-13 08:25:45.337062] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.337070] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.337077] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.337080] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.337083] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x2089b80) 00:25:11.870 [2024-02-13 08:25:45.337089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.870 [2024-02-13 08:25:45.337100] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1e50, cid 4, qid 0 00:25:11.870 [2024-02-13 08:25:45.337215] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.870 [2024-02-13 08:25:45.337222] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.870 [2024-02-13 08:25:45.337225] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.337228] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=4096, cccid=4 00:25:11.870 [2024-02-13 08:25:45.337232] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f1e50) on tqpair(0x2089b80): expected_datao=0, payload_size=4096 00:25:11.870 [2024-02-13 08:25:45.337369] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.337373] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.377825] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.870 [2024-02-13 08:25:45.377840] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.870 [2024-02-13 08:25:45.377843] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.377847] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1e50) on tqpair=0x2089b80 00:25:11.870 [2024-02-13 08:25:45.377859] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:11.870 [2024-02-13 08:25:45.377873] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns 
(timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.377882] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.377888] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.377892] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.377895] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2089b80) 00:25:11.870 [2024-02-13 08:25:45.377901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.870 [2024-02-13 08:25:45.377913] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1e50, cid 4, qid 0 00:25:11.870 [2024-02-13 08:25:45.378032] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.870 [2024-02-13 08:25:45.378039] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.870 [2024-02-13 08:25:45.378042] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.378045] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=4096, cccid=4 00:25:11.870 [2024-02-13 08:25:45.378048] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f1e50) on tqpair(0x2089b80): expected_datao=0, payload_size=4096 00:25:11.870 [2024-02-13 08:25:45.378205] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.378208] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.422654] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.870 [2024-02-13 08:25:45.422662] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.870 [2024-02-13 
08:25:45.422665] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.422669] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1e50) on tqpair=0x2089b80 00:25:11.870 [2024-02-13 08:25:45.422685] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.422693] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.422700] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.422704] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.422707] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2089b80) 00:25:11.870 [2024-02-13 08:25:45.422713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.870 [2024-02-13 08:25:45.422725] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1e50, cid 4, qid 0 00:25:11.870 [2024-02-13 08:25:45.422901] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.870 [2024-02-13 08:25:45.422908] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.870 [2024-02-13 08:25:45.422910] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.422914] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=4096, cccid=4 00:25:11.870 [2024-02-13 08:25:45.422918] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f1e50) on tqpair(0x2089b80): expected_datao=0, payload_size=4096 00:25:11.870 [2024-02-13 08:25:45.422977] 
nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.422982] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.423095] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.870 [2024-02-13 08:25:45.423101] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.870 [2024-02-13 08:25:45.423104] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.870 [2024-02-13 08:25:45.423107] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1e50) on tqpair=0x2089b80 00:25:11.870 [2024-02-13 08:25:45.423116] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.423124] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.423132] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.423137] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:11.870 [2024-02-13 08:25:45.423141] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:11.871 [2024-02-13 08:25:45.423146] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:11.871 [2024-02-13 08:25:45.423150] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:11.871 [2024-02-13 08:25:45.423154] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to ready (no timeout) 00:25:11.871 [2024-02-13 08:25:45.423168] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423171] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423174] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.871 [2024-02-13 08:25:45.423186] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423189] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423192] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.871 [2024-02-13 08:25:45.423211] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1e50, cid 4, qid 0 00:25:11.871 [2024-02-13 08:25:45.423215] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1fb0, cid 5, qid 0 00:25:11.871 [2024-02-13 08:25:45.423337] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.871 [2024-02-13 08:25:45.423343] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.871 [2024-02-13 08:25:45.423347] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423350] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1e50) on tqpair=0x2089b80 00:25:11.871 [2024-02-13 08:25:45.423356] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.871 [2024-02-13 08:25:45.423361] 
nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.871 [2024-02-13 08:25:45.423364] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423367] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1fb0) on tqpair=0x2089b80 00:25:11.871 [2024-02-13 08:25:45.423375] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423379] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423382] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.871 [2024-02-13 08:25:45.423399] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1fb0, cid 5, qid 0 00:25:11.871 [2024-02-13 08:25:45.423524] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.871 [2024-02-13 08:25:45.423529] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.871 [2024-02-13 08:25:45.423532] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423536] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1fb0) on tqpair=0x2089b80 00:25:11.871 [2024-02-13 08:25:45.423545] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423548] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423551] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:11.871 [2024-02-13 08:25:45.423568] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1fb0, cid 5, qid 0 00:25:11.871 [2024-02-13 08:25:45.423684] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.871 [2024-02-13 08:25:45.423690] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.871 [2024-02-13 08:25:45.423693] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423697] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1fb0) on tqpair=0x2089b80 00:25:11.871 [2024-02-13 08:25:45.423708] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423712] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423715] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.871 [2024-02-13 08:25:45.423731] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1fb0, cid 5, qid 0 00:25:11.871 [2024-02-13 08:25:45.423875] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.871 [2024-02-13 08:25:45.423881] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.871 [2024-02-13 08:25:45.423884] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423887] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1fb0) on tqpair=0x2089b80 00:25:11.871 [2024-02-13 08:25:45.423899] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423902] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423906] 
nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.871 [2024-02-13 08:25:45.423918] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423921] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423924] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.871 [2024-02-13 08:25:45.423935] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423938] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423941] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.871 [2024-02-13 08:25:45.423952] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423955] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.423958] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2089b80) 00:25:11.871 [2024-02-13 08:25:45.423963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:11.871 [2024-02-13 08:25:45.423975] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1fb0, cid 5, qid 0 00:25:11.871 [2024-02-13 08:25:45.423979] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1e50, cid 4, qid 0 00:25:11.871 [2024-02-13 08:25:45.423983] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f2110, cid 6, qid 0 00:25:11.871 [2024-02-13 08:25:45.423987] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f2270, cid 7, qid 0 00:25:11.871 [2024-02-13 08:25:45.424212] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.871 [2024-02-13 08:25:45.424220] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.871 [2024-02-13 08:25:45.424223] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424226] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=8192, cccid=5 00:25:11.871 [2024-02-13 08:25:45.424229] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f1fb0) on tqpair(0x2089b80): expected_datao=0, payload_size=8192 00:25:11.871 [2024-02-13 08:25:45.424239] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424242] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424247] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.871 [2024-02-13 08:25:45.424251] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.871 [2024-02-13 08:25:45.424254] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424257] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=512, cccid=4 00:25:11.871 [2024-02-13 08:25:45.424261] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f1e50) on 
tqpair(0x2089b80): expected_datao=0, payload_size=512 00:25:11.871 [2024-02-13 08:25:45.424266] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424269] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424274] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.871 [2024-02-13 08:25:45.424278] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.871 [2024-02-13 08:25:45.424281] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424284] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=512, cccid=6 00:25:11.871 [2024-02-13 08:25:45.424288] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f2110) on tqpair(0x2089b80): expected_datao=0, payload_size=512 00:25:11.871 [2024-02-13 08:25:45.424293] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424296] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424301] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.871 [2024-02-13 08:25:45.424305] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.871 [2024-02-13 08:25:45.424308] nvme_tcp.c:1648:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424311] nvme_tcp.c:1649:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2089b80): datao=0, datal=4096, cccid=7 00:25:11.871 [2024-02-13 08:25:45.424315] nvme_tcp.c:1660:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20f2270) on tqpair(0x2089b80): expected_datao=0, payload_size=4096 00:25:11.871 [2024-02-13 08:25:45.424320] nvme_tcp.c:1451:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424323] nvme_tcp.c:1235:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.871 
[2024-02-13 08:25:45.424428] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.871 [2024-02-13 08:25:45.424433] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.871 [2024-02-13 08:25:45.424436] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.871 [2024-02-13 08:25:45.424439] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1fb0) on tqpair=0x2089b80 00:25:11.871 [2024-02-13 08:25:45.424451] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.872 [2024-02-13 08:25:45.424456] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.872 [2024-02-13 08:25:45.424459] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.872 [2024-02-13 08:25:45.424462] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1e50) on tqpair=0x2089b80 00:25:11.872 [2024-02-13 08:25:45.424470] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.872 [2024-02-13 08:25:45.424475] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.872 [2024-02-13 08:25:45.424477] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.872 [2024-02-13 08:25:45.424480] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f2110) on tqpair=0x2089b80 00:25:11.872 [2024-02-13 08:25:45.424487] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.872 [2024-02-13 08:25:45.424491] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.872 [2024-02-13 08:25:45.424494] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.872 [2024-02-13 08:25:45.424498] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f2270) on tqpair=0x2089b80 00:25:11.872 ===================================================== 00:25:11.872 NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:25:11.872 =====================================================
00:25:11.872 Controller Capabilities/Features
00:25:11.872 ================================
00:25:11.872 Vendor ID: 8086
00:25:11.872 Subsystem Vendor ID: 8086
00:25:11.872 Serial Number: SPDK00000000000001
00:25:11.872 Model Number: SPDK bdev Controller
00:25:11.872 Firmware Version: 24.05
00:25:11.872 Recommended Arb Burst: 6
00:25:11.872 IEEE OUI Identifier: e4 d2 5c
00:25:11.872 Multi-path I/O
00:25:11.872 May have multiple subsystem ports: Yes
00:25:11.872 May have multiple controllers: Yes
00:25:11.872 Associated with SR-IOV VF: No
00:25:11.872 Max Data Transfer Size: 131072
00:25:11.872 Max Number of Namespaces: 32
00:25:11.872 Max Number of I/O Queues: 127
00:25:11.872 NVMe Specification Version (VS): 1.3
00:25:11.872 NVMe Specification Version (Identify): 1.3
00:25:11.872 Maximum Queue Entries: 128
00:25:11.872 Contiguous Queues Required: Yes
00:25:11.872 Arbitration Mechanisms Supported
00:25:11.872 Weighted Round Robin: Not Supported
00:25:11.872 Vendor Specific: Not Supported
00:25:11.872 Reset Timeout: 15000 ms
00:25:11.872 Doorbell Stride: 4 bytes
00:25:11.872 NVM Subsystem Reset: Not Supported
00:25:11.872 Command Sets Supported
00:25:11.872 NVM Command Set: Supported
00:25:11.872 Boot Partition: Not Supported
00:25:11.872 Memory Page Size Minimum: 4096 bytes
00:25:11.872 Memory Page Size Maximum: 4096 bytes
00:25:11.872 Persistent Memory Region: Not Supported
00:25:11.872 Optional Asynchronous Events Supported
00:25:11.872 Namespace Attribute Notices: Supported
00:25:11.872 Firmware Activation Notices: Not Supported
00:25:11.872 ANA Change Notices: Not Supported
00:25:11.872 PLE Aggregate Log Change Notices: Not Supported
00:25:11.872 LBA Status Info Alert Notices: Not Supported
00:25:11.872 EGE Aggregate Log Change Notices: Not Supported
00:25:11.872 Normal NVM Subsystem Shutdown event: Not Supported
00:25:11.872 Zone Descriptor Change Notices: Not Supported
00:25:11.872 Discovery Log Change Notices: Not Supported
00:25:11.872 Controller Attributes
00:25:11.872 128-bit Host Identifier: Supported
00:25:11.872 Non-Operational Permissive Mode: Not Supported
00:25:11.872 NVM Sets: Not Supported
00:25:11.872 Read Recovery Levels: Not Supported
00:25:11.872 Endurance Groups: Not Supported
00:25:11.872 Predictable Latency Mode: Not Supported
00:25:11.872 Traffic Based Keep ALive: Not Supported
00:25:11.872 Namespace Granularity: Not Supported
00:25:11.872 SQ Associations: Not Supported
00:25:11.872 UUID List: Not Supported
00:25:11.872 Multi-Domain Subsystem: Not Supported
00:25:11.872 Fixed Capacity Management: Not Supported
00:25:11.872 Variable Capacity Management: Not Supported
00:25:11.872 Delete Endurance Group: Not Supported
00:25:11.872 Delete NVM Set: Not Supported
00:25:11.872 Extended LBA Formats Supported: Not Supported
00:25:11.872 Flexible Data Placement Supported: Not Supported
00:25:11.872 
00:25:11.872 Controller Memory Buffer Support
00:25:11.872 ================================
00:25:11.872 Supported: No
00:25:11.872 
00:25:11.872 Persistent Memory Region Support
00:25:11.872 ================================
00:25:11.872 Supported: No
00:25:11.872 
00:25:11.872 Admin Command Set Attributes
00:25:11.872 ============================
00:25:11.872 Security Send/Receive: Not Supported
00:25:11.872 Format NVM: Not Supported
00:25:11.872 Firmware Activate/Download: Not Supported
00:25:11.872 Namespace Management: Not Supported
00:25:11.872 Device Self-Test: Not Supported
00:25:11.872 Directives: Not Supported
00:25:11.872 NVMe-MI: Not Supported
00:25:11.872 Virtualization Management: Not Supported
00:25:11.872 Doorbell Buffer Config: Not Supported
00:25:11.872 Get LBA Status Capability: Not Supported
00:25:11.872 Command & Feature Lockdown Capability: Not Supported
00:25:11.872 Abort Command Limit: 4
00:25:11.872 Async Event Request Limit: 4
00:25:11.872 Number of Firmware Slots: N/A
00:25:11.872 Firmware Slot 1 Read-Only: N/A
00:25:11.872 Firmware Activation Without Reset: N/A
00:25:11.872 Multiple Update Detection Support: N/A
00:25:11.872 Firmware Update Granularity: No Information Provided
00:25:11.872 Per-Namespace SMART Log: No
00:25:11.872 Asymmetric Namespace Access Log Page: Not Supported
00:25:11.872 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:25:11.872 Command Effects Log Page: Supported
00:25:11.872 Get Log Page Extended Data: Supported
00:25:11.872 Telemetry Log Pages: Not Supported
00:25:11.872 Persistent Event Log Pages: Not Supported
00:25:11.872 Supported Log Pages Log Page: May Support
00:25:11.872 Commands Supported & Effects Log Page: Not Supported
00:25:11.872 Feature Identifiers & Effects Log Page:May Support
00:25:11.872 NVMe-MI Commands & Effects Log Page: May Support
00:25:11.872 Data Area 4 for Telemetry Log: Not Supported
00:25:11.872 Error Log Page Entries Supported: 128
00:25:11.872 Keep Alive: Supported
00:25:11.872 Keep Alive Granularity: 10000 ms
00:25:11.872 
00:25:11.872 NVM Command Set Attributes
00:25:11.872 ==========================
00:25:11.872 Submission Queue Entry Size
00:25:11.872 Max: 64
00:25:11.872 Min: 64
00:25:11.872 Completion Queue Entry Size
00:25:11.872 Max: 16
00:25:11.872 Min: 16
00:25:11.872 Number of Namespaces: 32
00:25:11.872 Compare Command: Supported
00:25:11.872 Write Uncorrectable Command: Not Supported
00:25:11.872 Dataset Management Command: Supported
00:25:11.872 Write Zeroes Command: Supported
00:25:11.872 Set Features Save Field: Not Supported
00:25:11.872 Reservations: Supported
00:25:11.872 Timestamp: Not Supported
00:25:11.872 Copy: Supported
00:25:11.872 Volatile Write Cache: Present
00:25:11.872 Atomic Write Unit (Normal): 1
00:25:11.872 Atomic Write Unit (PFail): 1
00:25:11.872 Atomic Compare & Write Unit: 1
00:25:11.872 Fused Compare & Write: Supported
00:25:11.872 Scatter-Gather List
00:25:11.872 SGL Command Set: Supported
00:25:11.872 SGL Keyed: Supported
00:25:11.872 SGL Bit Bucket Descriptor: Not Supported
00:25:11.872 SGL Metadata Pointer: Not Supported
00:25:11.872 Oversized SGL: Not Supported
00:25:11.872 SGL Metadata Address: Not Supported
00:25:11.872 SGL Offset: Supported
00:25:11.872 Transport SGL Data Block: Not Supported
00:25:11.872 Replay Protected Memory Block: Not Supported
00:25:11.872 
00:25:11.872 Firmware Slot Information
00:25:11.872 =========================
00:25:11.872 Active slot: 1
00:25:11.872 Slot 1 Firmware Revision: 24.05
00:25:11.872 
00:25:11.872 
00:25:11.872 Commands Supported and Effects
00:25:11.872 ==============================
00:25:11.872 Admin Commands
00:25:11.872 --------------
00:25:11.872 Get Log Page (02h): Supported
00:25:11.872 Identify (06h): Supported
00:25:11.872 Abort (08h): Supported
00:25:11.872 Set Features (09h): Supported
00:25:11.872 Get Features (0Ah): Supported
00:25:11.872 Asynchronous Event Request (0Ch): Supported
00:25:11.872 Keep Alive (18h): Supported
00:25:11.872 I/O Commands
00:25:11.872 ------------
00:25:11.872 Flush (00h): Supported LBA-Change
00:25:11.872 Write (01h): Supported LBA-Change
00:25:11.872 Read (02h): Supported
00:25:11.872 Compare (05h): Supported
00:25:11.872 Write Zeroes (08h): Supported LBA-Change
00:25:11.872 Dataset Management (09h): Supported LBA-Change
00:25:11.872 Copy (19h): Supported LBA-Change
00:25:11.872 Unknown (79h): Supported LBA-Change
00:25:11.872 Unknown (7Ah): Supported
00:25:11.872 
00:25:11.872 Error Log
00:25:11.872 =========
00:25:11.872 
00:25:11.872 Arbitration
00:25:11.872 ===========
00:25:11.872 Arbitration Burst: 1
00:25:11.872 
00:25:11.872 Power Management
00:25:11.872 ================
00:25:11.872 Number of Power States: 1
00:25:11.873 Current Power State: Power State #0
00:25:11.873 Power State #0:
00:25:11.873 Max Power: 0.00 W
00:25:11.873 Non-Operational State: Operational
00:25:11.873 Entry Latency: Not Reported
00:25:11.873 Exit Latency: Not Reported
00:25:11.873 Relative Read Throughput: 0
00:25:11.873 Relative Read Latency: 0
00:25:11.873 Relative Write Throughput: 0 00:25:11.873 Relative Write Latency: 0 00:25:11.873 Idle Power: Not Reported 00:25:11.873 Active Power: Not Reported 00:25:11.873 Non-Operational Permissive Mode: Not Supported 00:25:11.873 00:25:11.873 Health Information 00:25:11.873 ================== 00:25:11.873 Critical Warnings: 00:25:11.873 Available Spare Space: OK 00:25:11.873 Temperature: OK 00:25:11.873 Device Reliability: OK 00:25:11.873 Read Only: No 00:25:11.873 Volatile Memory Backup: OK 00:25:11.873 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:11.873 Temperature Threshold: [2024-02-13 08:25:45.424584] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.424588] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.424592] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2089b80) 00:25:11.873 [2024-02-13 08:25:45.424598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.873 [2024-02-13 08:25:45.424610] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f2270, cid 7, qid 0 00:25:11.873 [2024-02-13 08:25:45.424769] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.873 [2024-02-13 08:25:45.424777] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.873 [2024-02-13 08:25:45.424780] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.424783] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f2270) on tqpair=0x2089b80 00:25:11.873 [2024-02-13 08:25:45.424811] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:11.873 [2024-02-13 08:25:45.424821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.873 [2024-02-13 08:25:45.424827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.873 [2024-02-13 08:25:45.424832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.873 [2024-02-13 08:25:45.424836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.873 [2024-02-13 08:25:45.424844] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.424847] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.424850] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.873 [2024-02-13 08:25:45.424857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.873 [2024-02-13 08:25:45.424869] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.873 [2024-02-13 08:25:45.424980] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.873 [2024-02-13 08:25:45.424986] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.873 [2024-02-13 08:25:45.424989] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.424992] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.873 [2024-02-13 08:25:45.424998] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425001] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425004] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 
on tqpair(0x2089b80) 00:25:11.873 [2024-02-13 08:25:45.425010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.873 [2024-02-13 08:25:45.425024] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.873 [2024-02-13 08:25:45.425172] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.873 [2024-02-13 08:25:45.425177] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.873 [2024-02-13 08:25:45.425180] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425183] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.873 [2024-02-13 08:25:45.425188] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:11.873 [2024-02-13 08:25:45.425194] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:11.873 [2024-02-13 08:25:45.425203] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425207] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425210] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.873 [2024-02-13 08:25:45.425216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.873 [2024-02-13 08:25:45.425226] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.873 [2024-02-13 08:25:45.425371] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.873 [2024-02-13 08:25:45.425377] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.873 [2024-02-13 
08:25:45.425380] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425383] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.873 [2024-02-13 08:25:45.425393] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425396] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425399] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.873 [2024-02-13 08:25:45.425405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.873 [2024-02-13 08:25:45.425415] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.873 [2024-02-13 08:25:45.425524] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.873 [2024-02-13 08:25:45.425529] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.873 [2024-02-13 08:25:45.425532] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425535] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.873 [2024-02-13 08:25:45.425545] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425549] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425552] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.873 [2024-02-13 08:25:45.425557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.873 [2024-02-13 08:25:45.425567] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 
00:25:11.873 [2024-02-13 08:25:45.425677] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.873 [2024-02-13 08:25:45.425684] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.873 [2024-02-13 08:25:45.425687] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425690] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.873 [2024-02-13 08:25:45.425700] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425703] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.873 [2024-02-13 08:25:45.425706] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.873 [2024-02-13 08:25:45.425712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.873 [2024-02-13 08:25:45.425723] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.873 [2024-02-13 08:25:45.425878] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.873 [2024-02-13 08:25:45.425883] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.873 [2024-02-13 08:25:45.425886] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.425889] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.874 [2024-02-13 08:25:45.425913] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.425916] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.425919] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.874 [2024-02-13 08:25:45.425925] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.874 [2024-02-13 08:25:45.425936] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.874 [2024-02-13 08:25:45.426077] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.874 [2024-02-13 08:25:45.426082] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.874 [2024-02-13 08:25:45.426085] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426088] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.874 [2024-02-13 08:25:45.426098] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426101] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426104] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.874 [2024-02-13 08:25:45.426110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.874 [2024-02-13 08:25:45.426119] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.874 [2024-02-13 08:25:45.426229] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.874 [2024-02-13 08:25:45.426234] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.874 [2024-02-13 08:25:45.426237] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426240] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.874 [2024-02-13 08:25:45.426250] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426253] nvme_tcp.c: 
891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426256] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.874 [2024-02-13 08:25:45.426262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.874 [2024-02-13 08:25:45.426272] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.874 [2024-02-13 08:25:45.426371] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.874 [2024-02-13 08:25:45.426377] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.874 [2024-02-13 08:25:45.426379] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426382] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.874 [2024-02-13 08:25:45.426391] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426395] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426398] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.874 [2024-02-13 08:25:45.426404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.874 [2024-02-13 08:25:45.426414] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.874 [2024-02-13 08:25:45.426533] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.874 [2024-02-13 08:25:45.426539] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.874 [2024-02-13 08:25:45.426541] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426545] nvme_tcp.c: 
855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.874 [2024-02-13 08:25:45.426554] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426559] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.426562] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.874 [2024-02-13 08:25:45.426568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.874 [2024-02-13 08:25:45.426578] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.874 [2024-02-13 08:25:45.430654] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.874 [2024-02-13 08:25:45.430665] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.874 [2024-02-13 08:25:45.430668] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.430672] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.874 [2024-02-13 08:25:45.430683] nvme_tcp.c: 737:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.430686] nvme_tcp.c: 891:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.430690] nvme_tcp.c: 900:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2089b80) 00:25:11.874 [2024-02-13 08:25:45.430696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.874 [2024-02-13 08:25:45.430708] nvme_tcp.c: 870:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20f1cf0, cid 3, qid 0 00:25:11.874 [2024-02-13 08:25:45.430933] nvme_tcp.c:1103:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.874 [2024-02-13 
08:25:45.430940] nvme_tcp.c:1886:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.874 [2024-02-13 08:25:45.430943] nvme_tcp.c:1578:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.874 [2024-02-13 08:25:45.430946] nvme_tcp.c: 855:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20f1cf0) on tqpair=0x2089b80 00:25:11.874 [2024-02-13 08:25:45.430954] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:11.874 0 Kelvin (-273 Celsius) 00:25:11.874 Available Spare: 0% 00:25:11.874 Available Spare Threshold: 0% 00:25:11.874 Life Percentage Used: 0% 00:25:11.874 Data Units Read: 0 00:25:11.874 Data Units Written: 0 00:25:11.874 Host Read Commands: 0 00:25:11.874 Host Write Commands: 0 00:25:11.874 Controller Busy Time: 0 minutes 00:25:11.874 Power Cycles: 0 00:25:11.874 Power On Hours: 0 hours 00:25:11.874 Unsafe Shutdowns: 0 00:25:11.874 Unrecoverable Media Errors: 0 00:25:11.874 Lifetime Error Log Entries: 0 00:25:11.874 Warning Temperature Time: 0 minutes 00:25:11.874 Critical Temperature Time: 0 minutes 00:25:11.874 00:25:11.874 Number of Queues 00:25:11.874 ================ 00:25:11.874 Number of I/O Submission Queues: 127 00:25:11.874 Number of I/O Completion Queues: 127 00:25:11.874 00:25:11.874 Active Namespaces 00:25:11.874 ================= 00:25:11.874 Namespace ID:1 00:25:11.874 Error Recovery Timeout: Unlimited 00:25:11.874 Command Set Identifier: NVM (00h) 00:25:11.874 Deallocate: Supported 00:25:11.874 Deallocated/Unwritten Error: Not Supported 00:25:11.874 Deallocated Read Value: Unknown 00:25:11.874 Deallocate in Write Zeroes: Not Supported 00:25:11.874 Deallocated Guard Field: 0xFFFF 00:25:11.874 Flush: Supported 00:25:11.874 Reservation: Supported 00:25:11.874 Namespace Sharing Capabilities: Multiple Controllers 00:25:11.874 Size (in LBAs): 131072 (0GiB) 00:25:11.874 Capacity (in LBAs): 131072 (0GiB) 00:25:11.874 Utilization (in LBAs): 131072 (0GiB) 
00:25:11.874 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:11.874 EUI64: ABCDEF0123456789 00:25:11.874 UUID: f0157a1b-2050-4530-b842-285c013cf2b5 00:25:11.874 Thin Provisioning: Not Supported 00:25:11.874 Per-NS Atomic Units: Yes 00:25:11.874 Atomic Boundary Size (Normal): 0 00:25:11.874 Atomic Boundary Size (PFail): 0 00:25:11.874 Atomic Boundary Offset: 0 00:25:11.874 Maximum Single Source Range Length: 65535 00:25:11.874 Maximum Copy Length: 65535 00:25:11.874 Maximum Source Range Count: 1 00:25:11.874 NGUID/EUI64 Never Reused: No 00:25:11.874 Namespace Write Protected: No 00:25:11.874 Number of LBA Formats: 1 00:25:11.874 Current LBA Format: LBA Format #00 00:25:11.874 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:11.874 00:25:11.874 08:25:45 -- host/identify.sh@51 -- # sync 00:25:11.874 08:25:45 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.874 08:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.874 08:25:45 -- common/autotest_common.sh@10 -- # set +x 00:25:11.874 08:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.874 08:25:45 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:11.874 08:25:45 -- host/identify.sh@56 -- # nvmftestfini 00:25:11.874 08:25:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:11.874 08:25:45 -- nvmf/common.sh@116 -- # sync 00:25:11.874 08:25:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:11.874 08:25:45 -- nvmf/common.sh@119 -- # set +e 00:25:11.874 08:25:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:11.874 08:25:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:11.874 rmmod nvme_tcp 00:25:11.874 rmmod nvme_fabrics 00:25:11.874 rmmod nvme_keyring 00:25:11.874 08:25:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:11.874 08:25:45 -- nvmf/common.sh@123 -- # set -e 00:25:11.874 08:25:45 -- nvmf/common.sh@124 -- # return 0 00:25:11.874 08:25:45 -- nvmf/common.sh@477 -- # '[' -n 2376799 ']' 
00:25:11.874 08:25:45 -- nvmf/common.sh@478 -- # killprocess 2376799 00:25:11.874 08:25:45 -- common/autotest_common.sh@924 -- # '[' -z 2376799 ']' 00:25:11.874 08:25:45 -- common/autotest_common.sh@928 -- # kill -0 2376799 00:25:11.874 08:25:45 -- common/autotest_common.sh@929 -- # uname 00:25:11.874 08:25:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:11.874 08:25:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2376799 00:25:12.183 08:25:45 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:12.183 08:25:45 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:12.183 08:25:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2376799' 00:25:12.183 killing process with pid 2376799 00:25:12.183 08:25:45 -- common/autotest_common.sh@943 -- # kill 2376799 00:25:12.183 [2024-02-13 08:25:45.554660] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:12.183 08:25:45 -- common/autotest_common.sh@948 -- # wait 2376799 00:25:12.183 08:25:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:12.183 08:25:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:12.183 08:25:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:12.183 08:25:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.183 08:25:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:12.183 08:25:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.183 08:25:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.183 08:25:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.724 08:25:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:14.724 00:25:14.724 real 0m9.952s 00:25:14.724 user 0m7.647s 00:25:14.724 sys 0m4.975s 00:25:14.724 08:25:47 -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:25:14.724 08:25:47 -- common/autotest_common.sh@10 -- # set +x 00:25:14.724 ************************************ 00:25:14.724 END TEST nvmf_identify 00:25:14.724 ************************************ 00:25:14.724 08:25:47 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:14.724 08:25:47 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:25:14.724 08:25:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:14.724 08:25:47 -- common/autotest_common.sh@10 -- # set +x 00:25:14.724 ************************************ 00:25:14.724 START TEST nvmf_perf 00:25:14.724 ************************************ 00:25:14.724 08:25:47 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:14.724 * Looking for test storage... 00:25:14.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.724 08:25:47 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.724 08:25:47 -- nvmf/common.sh@7 -- # uname -s 00:25:14.724 08:25:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.724 08:25:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.724 08:25:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.724 08:25:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.724 08:25:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.724 08:25:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.724 08:25:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.724 08:25:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.724 08:25:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.724 08:25:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.724 08:25:47 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:14.724 08:25:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:14.724 08:25:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.724 08:25:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.724 08:25:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.724 08:25:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.724 08:25:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.724 08:25:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.724 08:25:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.724 08:25:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.724 08:25:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.724 08:25:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.724 08:25:47 -- paths/export.sh@5 -- # export PATH 00:25:14.724 08:25:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.724 08:25:47 -- nvmf/common.sh@46 -- # : 0 00:25:14.724 08:25:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:14.724 08:25:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:14.724 08:25:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:14.724 08:25:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.724 08:25:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.724 08:25:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:14.724 08:25:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:14.724 08:25:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:14.724 08:25:48 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:14.724 08:25:48 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:14.724 08:25:48 -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:14.724 08:25:48 -- host/perf.sh@17 -- # nvmftestinit 00:25:14.724 08:25:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:14.724 08:25:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.724 08:25:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:14.724 08:25:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:14.724 08:25:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:14.724 08:25:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.724 08:25:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.724 08:25:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.724 08:25:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:14.724 08:25:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:14.724 08:25:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:14.724 08:25:48 -- common/autotest_common.sh@10 -- # set +x 00:25:21.300 08:25:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:21.300 08:25:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:21.300 08:25:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:21.300 08:25:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:21.300 08:25:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:21.300 08:25:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:21.300 08:25:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:21.300 08:25:53 -- nvmf/common.sh@294 -- # net_devs=() 00:25:21.300 08:25:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:21.300 08:25:53 -- nvmf/common.sh@295 -- # e810=() 00:25:21.300 08:25:53 -- nvmf/common.sh@295 -- # local -ga e810 00:25:21.300 08:25:53 -- nvmf/common.sh@296 -- # x722=() 00:25:21.300 08:25:53 -- nvmf/common.sh@296 -- # local -ga x722 00:25:21.300 08:25:53 -- nvmf/common.sh@297 -- # mlx=() 00:25:21.300 08:25:53 -- nvmf/common.sh@297 -- # local -ga mlx 
00:25:21.300 08:25:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.300 08:25:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:21.300 08:25:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:21.300 08:25:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:21.300 08:25:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.300 08:25:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:21.300 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:21.300 08:25:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@351 -- # [[ 
tcp == rdma ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.300 08:25:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:21.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:21.300 08:25:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:21.300 08:25:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:21.300 08:25:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.300 08:25:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.300 08:25:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.300 08:25:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.300 08:25:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:21.300 Found net devices under 0000:af:00.0: cvl_0_0 00:25:21.300 08:25:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.300 08:25:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.300 08:25:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.300 08:25:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.300 08:25:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.300 08:25:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:21.300 Found net devices under 0000:af:00.1: cvl_0_1 00:25:21.300 08:25:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.300 08:25:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 
00:25:21.300 08:25:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:21.301 08:25:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:21.301 08:25:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:21.301 08:25:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:21.301 08:25:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.301 08:25:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.301 08:25:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.301 08:25:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:21.301 08:25:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.301 08:25:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.301 08:25:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:21.301 08:25:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.301 08:25:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.301 08:25:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:21.301 08:25:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:21.301 08:25:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.301 08:25:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.301 08:25:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.301 08:25:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.301 08:25:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:21.301 08:25:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.301 08:25:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.301 08:25:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.301 08:25:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:21.301 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:25:21.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:25:21.301 00:25:21.301 --- 10.0.0.2 ping statistics --- 00:25:21.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.301 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:21.301 08:25:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:25:21.301 00:25:21.301 --- 10.0.0.1 ping statistics --- 00:25:21.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.301 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:25:21.301 08:25:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.301 08:25:54 -- nvmf/common.sh@410 -- # return 0 00:25:21.301 08:25:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:21.301 08:25:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.301 08:25:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:21.301 08:25:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:21.301 08:25:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.301 08:25:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:21.301 08:25:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:21.301 08:25:54 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:21.301 08:25:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:21.301 08:25:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:21.301 08:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:21.301 08:25:54 -- nvmf/common.sh@469 -- # nvmfpid=2380843 00:25:21.301 08:25:54 -- nvmf/common.sh@470 -- # waitforlisten 2380843 00:25:21.301 08:25:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.301 08:25:54 -- 
common/autotest_common.sh@817 -- # '[' -z 2380843 ']' 00:25:21.301 08:25:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.301 08:25:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:21.301 08:25:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.301 08:25:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:21.301 08:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:21.301 [2024-02-13 08:25:54.276288] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:21.301 [2024-02-13 08:25:54.276331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.301 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.301 [2024-02-13 08:25:54.338246] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.301 [2024-02-13 08:25:54.414751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:21.301 [2024-02-13 08:25:54.414855] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.301 [2024-02-13 08:25:54.414864] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.301 [2024-02-13 08:25:54.414869] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:21.301 [2024-02-13 08:25:54.414911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.301 [2024-02-13 08:25:54.415004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.301 [2024-02-13 08:25:54.415094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.301 [2024-02-13 08:25:54.415095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.561 08:25:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.561 08:25:55 -- common/autotest_common.sh@850 -- # return 0 00:25:21.561 08:25:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:21.561 08:25:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:21.561 08:25:55 -- common/autotest_common.sh@10 -- # set +x 00:25:21.561 08:25:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.561 08:25:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:21.561 08:25:55 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:24.851 08:25:58 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:24.851 08:25:58 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:24.851 08:25:58 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:24.852 08:25:58 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:24.852 08:25:58 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:24.852 08:25:58 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:24.852 08:25:58 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:24.852 08:25:58 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:24.852 08:25:58 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:25:25.110 [2024-02-13 08:25:58.637058] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.110 08:25:58 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.370 08:25:58 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.370 08:25:58 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.370 08:25:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.370 08:25:59 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:25.629 08:25:59 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.888 [2024-02-13 08:25:59.360729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.888 08:25:59 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:25.888 08:25:59 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:25.888 08:25:59 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:25.888 08:25:59 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:25.888 08:25:59 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:27.269 Initializing NVMe Controllers 00:25:27.269 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:27.269 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:27.269 Initialization complete. Launching workers. 
00:25:27.269 ========================================================
00:25:27.269 Latency(us)
00:25:27.269 Device Information : IOPS MiB/s Average min max
00:25:27.269 PCIE (0000:5e:00.0) NSID 1 from core 0: 102277.42 399.52 312.45 9.28 4566.17
00:25:27.269 ========================================================
00:25:27.269 Total : 102277.42 399.52 312.45 9.28 4566.17
00:25:27.269
00:25:27.269 08:26:00 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:27.269 EAL: No free 2048 kB hugepages reported on node 1
00:25:28.651 Initializing NVMe Controllers
00:25:28.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:28.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:28.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:28.651 Initialization complete. Launching workers.
00:25:28.651 ========================================================
00:25:28.651 Latency(us)
00:25:28.651 Device Information : IOPS MiB/s Average min max
00:25:28.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12698.52 331.49 45944.04
00:25:28.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19715.91 7960.47 47902.75
00:25:28.651 ========================================================
00:25:28.651 Total : 131.00 0.51 15430.48 331.49 47902.75
00:25:28.651
00:25:28.651 08:26:02 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:28.651 EAL: No free 2048 kB hugepages reported on node 1
00:25:30.032 Initializing NVMe Controllers
00:25:30.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:30.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:30.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:30.032 Initialization complete. Launching workers.
00:25:30.032 ========================================================
00:25:30.032 Latency(us)
00:25:30.032 Device Information : IOPS MiB/s Average min max
00:25:30.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8659.99 33.83 3704.33 724.13 8463.68
00:25:30.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3933.00 15.36 8181.16 7312.12 15896.20
00:25:30.032 ========================================================
00:25:30.032 Total : 12592.99 49.19 5102.52 724.13 15896.20
00:25:30.032
00:25:30.032 08:26:03 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:25:30.032 08:26:03 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:25:30.032 08:26:03 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:30.032 EAL: No free 2048 kB hugepages reported on node 1
00:25:32.571 Initializing NVMe Controllers
00:25:32.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:32.571 Controller IO queue size 128, less than required.
00:25:32.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:32.571 Controller IO queue size 128, less than required.
00:25:32.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:32.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:32.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:32.571 Initialization complete. Launching workers.
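In these spdk_nvme_perf tables the MiB/s column is simply IOPS × IO size / 2^20. A quick awk check against the q=32, `-o 4096` NSID 1 row, with the row's IOPS value copied from the log:

```shell
# Recompute MiB/s for the q=32, -o 4096 run: IOPS * io_size / 2^20.
awk 'BEGIN {
    iops = 8659.99; io_size = 4096
    printf "%.2f\n", iops * io_size / (1024 * 1024)
}'
```

The result rounds to 33.83, agreeing with the table's MiB/s column for that row.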
00:25:32.571 ========================================================
00:25:32.571 Latency(us)
00:25:32.571 Device Information : IOPS MiB/s Average min max
00:25:32.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 952.00 238.00 138324.01 83281.88 233022.68
00:25:32.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.50 151.62 221890.03 71640.76 359967.87
00:25:32.571 ========================================================
00:25:32.571 Total : 1558.50 389.62 170844.25 71640.76 359967.87
00:25:32.571
00:25:32.571 08:26:05 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:32.571 EAL: No free 2048 kB hugepages reported on node 1
00:25:32.571 No valid NVMe controllers or AIO or URING devices found
00:25:32.571 Initializing NVMe Controllers
00:25:32.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:32.571 Controller IO queue size 128, less than required.
00:25:32.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:32.571 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:32.571 Controller IO queue size 128, less than required.
00:25:32.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:32.571 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:32.571 WARNING: Some requested NVMe devices were skipped
00:25:32.571 08:26:06 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:32.571 EAL: No free 2048 kB hugepages reported on node 1
00:25:35.111 Initializing NVMe Controllers
00:25:35.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:35.111 Controller IO queue size 128, less than required.
00:25:35.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:35.111 Controller IO queue size 128, less than required.
00:25:35.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:35.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:35.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:35.111 Initialization complete. Launching workers.
00:25:35.111
00:25:35.111 ====================
00:25:35.111 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:35.111 TCP transport:
00:25:35.111 polls: 51798
00:25:35.111 idle_polls: 15636
00:25:35.111 sock_completions: 36162
00:25:35.111 nvme_completions: 3963
00:25:35.111 submitted_requests: 5972
00:25:35.111 queued_requests: 1
00:25:35.111
00:25:35.111 ====================
00:25:35.111 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:35.111 TCP transport:
00:25:35.111 polls: 53486
00:25:35.111 idle_polls: 15445
00:25:35.111 sock_completions: 38041
00:25:35.111 nvme_completions: 3949
00:25:35.111 submitted_requests: 5882
00:25:35.111 queued_requests: 1
00:25:35.111 ========================================================
00:25:35.111 Latency(us)
00:25:35.111 Device Information : IOPS MiB/s Average min max
00:25:35.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 990.49 247.62 133146.33 75328.31 201220.48
00:25:35.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 986.99 246.75 133471.84 71758.61 198840.24
00:25:35.111 ========================================================
00:25:35.111 Total : 1977.48 494.37 133308.80 71758.61 201220.48
00:25:35.111
00:25:35.111 08:26:08 -- host/perf.sh@66 -- # sync
00:25:35.111 08:26:08 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:35.371 08:26:08 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:25:35.371 08:26:08 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']'
00:25:35.371 08:26:08 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:25:38.736 08:26:12 -- host/perf.sh@72 -- # ls_guid=c37dcb48-c8a7-407f-9462-4b3e4f838dfb
00:25:38.736 08:26:12 -- host/perf.sh@73 -- # get_lvs_free_mb c37dcb48-c8a7-407f-9462-4b3e4f838dfb
00:25:38.736 08:26:12 -- common/autotest_common.sh@1341 -- # local lvs_uuid=c37dcb48-c8a7-407f-9462-4b3e4f838dfb
00:25:38.736 08:26:12 -- common/autotest_common.sh@1342 -- # local lvs_info
00:25:38.736 08:26:12 -- common/autotest_common.sh@1343 -- # local fc
00:25:38.736 08:26:12 -- common/autotest_common.sh@1344 -- # local cs
00:25:38.736 08:26:12 -- common/autotest_common.sh@1345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:25:38.736 08:26:12 -- common/autotest_common.sh@1345 -- # lvs_info='[
00:25:38.736 {
00:25:38.736 "uuid": "c37dcb48-c8a7-407f-9462-4b3e4f838dfb",
00:25:38.736 "name": "lvs_0",
00:25:38.736 "base_bdev": "Nvme0n1",
00:25:38.736 "total_data_clusters": 238234,
00:25:38.736 "free_clusters": 238234,
00:25:38.736 "block_size": 512,
00:25:38.736 "cluster_size": 4194304
00:25:38.736 }
00:25:38.736 ]'
00:25:38.736 08:26:12 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="c37dcb48-c8a7-407f-9462-4b3e4f838dfb") .free_clusters'
00:25:38.736 08:26:12 -- common/autotest_common.sh@1346 -- # fc=238234
00:25:38.736 08:26:12 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="c37dcb48-c8a7-407f-9462-4b3e4f838dfb") .cluster_size'
00:25:38.736 08:26:12 -- common/autotest_common.sh@1347 -- # cs=4194304
00:25:38.736 08:26:12 -- common/autotest_common.sh@1350 -- # free_mb=952936
00:25:38.736 08:26:12 -- common/autotest_common.sh@1351 -- # echo 952936
952936
00:25:38.736 08:26:12 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']'
00:25:38.736 08:26:12 -- host/perf.sh@78 -- # free_mb=20480
00:25:38.736 08:26:12 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c37dcb48-c8a7-407f-9462-4b3e4f838dfb lbd_0 20480
00:25:39.304 08:26:12 -- host/perf.sh@80 -- # lb_guid=e59b476d-ee7c-4fde-b9bc-8f0aaa92c98e
00:25:39.304 08:26:12 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e59b476d-ee7c-4fde-b9bc-8f0aaa92c98e lvs_n_0
00:25:39.872 08:26:13 -- host/perf.sh@83 -- # ls_nested_guid=ca7ea27b-576b-4fc7-912e-16ab091416ef
00:25:39.872 08:26:13 -- host/perf.sh@84 -- # get_lvs_free_mb ca7ea27b-576b-4fc7-912e-16ab091416ef
00:25:39.872 08:26:13 -- common/autotest_common.sh@1341 -- # local lvs_uuid=ca7ea27b-576b-4fc7-912e-16ab091416ef
00:25:39.872 08:26:13 -- common/autotest_common.sh@1342 -- # local lvs_info
00:25:39.872 08:26:13 -- common/autotest_common.sh@1343 -- # local fc
00:25:39.872 08:26:13 -- common/autotest_common.sh@1344 -- # local cs
00:25:39.872 08:26:13 -- common/autotest_common.sh@1345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:25:40.132 08:26:13 -- common/autotest_common.sh@1345 -- # lvs_info='[
00:25:40.132 {
00:25:40.132 "uuid": "c37dcb48-c8a7-407f-9462-4b3e4f838dfb",
00:25:40.132 "name": "lvs_0",
00:25:40.132 "base_bdev": "Nvme0n1",
00:25:40.132 "total_data_clusters": 238234,
00:25:40.132 "free_clusters": 233114,
00:25:40.132 "block_size": 512,
00:25:40.132 "cluster_size": 4194304
00:25:40.132 },
00:25:40.132 {
00:25:40.132 "uuid": "ca7ea27b-576b-4fc7-912e-16ab091416ef",
00:25:40.132 "name": "lvs_n_0",
00:25:40.132 "base_bdev": "e59b476d-ee7c-4fde-b9bc-8f0aaa92c98e",
00:25:40.132 "total_data_clusters": 5114,
00:25:40.132 "free_clusters": 5114,
00:25:40.132 "block_size": 512,
00:25:40.132 "cluster_size": 4194304
00:25:40.132 }
00:25:40.132 ]'
00:25:40.132 08:26:13 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="ca7ea27b-576b-4fc7-912e-16ab091416ef") .free_clusters'
00:25:40.132 08:26:13 -- common/autotest_common.sh@1346 -- # fc=5114
00:25:40.132 08:26:13 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="ca7ea27b-576b-4fc7-912e-16ab091416ef") .cluster_size'
00:25:40.132 08:26:13 -- common/autotest_common.sh@1347 -- # cs=4194304
00:25:40.132 08:26:13 -- common/autotest_common.sh@1350 -- # free_mb=20456
00:25:40.132 08:26:13 -- common/autotest_common.sh@1351 -- # echo 20456
00:25:40.132 20456
00:25:40.132 08:26:13 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']'
00:25:40.132 08:26:13 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ca7ea27b-576b-4fc7-912e-16ab091416ef lbd_nest_0 20456
00:25:40.392 08:26:13 -- host/perf.sh@88 -- # lb_nested_guid=4619f400-dd1e-4f82-98b3-9931054cbe3a
00:25:40.392 08:26:13 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:40.392 08:26:14 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:25:40.392 08:26:14 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4619f400-dd1e-4f82-98b3-9931054cbe3a
00:25:40.651 08:26:14 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:40.911 08:26:14 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:25:40.911 08:26:14 -- host/perf.sh@96 -- # io_size=("512" "131072")
00:25:40.911 08:26:14 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:25:40.911 08:26:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:40.911 08:26:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:40.911 EAL: No free 2048 kB hugepages reported on node 1
00:25:53.126 Initializing NVMe Controllers
00:25:53.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:53.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:53.126 Initialization complete. Launching workers.
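The get_lvs_free_mb helper traced above derives its 20456 MB figure as free_clusters × cluster_size scaled to MiB, using the two jq filters over the bdev_lvol_get_lvstores output. The arithmetic, with the values copied from the log (the trace itself extracts them with jq):

```shell
# Values the jq filters pull out for lvs_n_0 in the trace above:
fc=5114        # free_clusters
cs=4194304     # cluster_size in bytes (4 MiB)

# free MB = free_clusters * cluster_size / 2^20
free_mb=$(( fc * cs / 1024 / 1024 ))   # 5114 * 4 MiB = 20456
echo "$free_mb"
```

Since 20456 is not greater than the 20480 cap checked at host/perf.sh@85, the full free size is used for lbd_nest_0.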
00:25:53.126 ========================================================
00:25:53.126 Latency(us)
00:25:53.126 Device Information : IOPS MiB/s Average min max
00:25:53.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.40 0.02 21100.77 262.26 47445.24
00:25:53.126 ========================================================
00:25:53.126 Total : 47.40 0.02 21100.77 262.26 47445.24
00:25:53.126
00:25:53.126 08:26:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:25:53.126 08:26:24 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:53.126 EAL: No free 2048 kB hugepages reported on node 1
00:26:03.113 Initializing NVMe Controllers
00:26:03.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:03.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:03.113 Initialization complete. Launching workers.
00:26:03.113 ========================================================
00:26:03.113 Latency(us)
00:26:03.113 Device Information : IOPS MiB/s Average min max
00:26:03.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.68 10.33 12104.31 4006.35 22898.16
00:26:03.113 ========================================================
00:26:03.113 Total : 82.68 10.33 12104.31 4006.35 22898.16
00:26:03.113
00:26:03.113 08:26:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:26:03.113 08:26:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:03.113 08:26:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:03.113 EAL: No free 2048 kB hugepages reported on node 1
00:26:13.101 Initializing NVMe Controllers
00:26:13.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:13.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:13.101 Initialization complete. Launching workers.
00:26:13.101 ========================================================
00:26:13.101 Latency(us)
00:26:13.101 Device Information : IOPS MiB/s Average min max
00:26:13.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8832.72 4.31 3622.51 356.84 11948.37
00:26:13.101 ========================================================
00:26:13.101 Total : 8832.72 4.31 3622.51 356.84 11948.37
00:26:13.101
00:26:13.101 08:26:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:13.101 08:26:45 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:13.101 EAL: No free 2048 kB hugepages reported on node 1
00:26:23.111 Initializing NVMe Controllers
00:26:23.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:23.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:23.111 Initialization complete. Launching workers.
00:26:23.111 ========================================================
00:26:23.111 Latency(us)
00:26:23.111 Device Information : IOPS MiB/s Average min max
00:26:23.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1799.34 224.92 17787.74 1369.73 45113.30
00:26:23.111 ========================================================
00:26:23.111 Total : 1799.34 224.92 17787.74 1369.73 45113.30
00:26:23.111
00:26:23.111 08:26:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:26:23.111 08:26:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:23.111 08:26:55 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:23.111 EAL: No free 2048 kB hugepages reported on node 1
00:26:33.122 Initializing NVMe Controllers
00:26:33.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:33.122 Controller IO queue size 128, less than required.
00:26:33.122 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:33.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:33.122 Initialization complete. Launching workers.
00:26:33.122 ========================================================
00:26:33.122 Latency(us)
00:26:33.122 Device Information : IOPS MiB/s Average min max
00:26:33.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15579.28 7.61 8216.18 1408.54 18864.17
00:26:33.122 ========================================================
00:26:33.122 Total : 15579.28 7.61 8216.18 1408.54 18864.17
00:26:33.122
00:26:33.122 08:27:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:26:33.122 08:27:05 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:33.122 EAL: No free 2048 kB hugepages reported on node 1
00:26:43.109 Initializing NVMe Controllers
00:26:43.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:43.109 Controller IO queue size 128, less than required.
00:26:43.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:43.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:43.109 Initialization complete. Launching workers.
00:26:43.109 ========================================================
00:26:43.109 Latency(us)
00:26:43.109 Device Information : IOPS MiB/s Average min max
00:26:43.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1205.66 150.71 106694.54 30349.85 213932.73
00:26:43.109 ========================================================
00:26:43.109 Total : 1205.66 150.71 106694.54 30349.85 213932.73
00:26:43.109
00:26:43.109 08:27:16 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:43.109 08:27:16 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4619f400-dd1e-4f82-98b3-9931054cbe3a
00:26:43.676 08:27:17 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:26:43.676 08:27:17 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e59b476d-ee7c-4fde-b9bc-8f0aaa92c98e
00:26:43.935 08:27:17 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:26:44.194 08:27:17 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:26:44.194 08:27:17 -- host/perf.sh@114 -- # nvmftestfini
00:26:44.194 08:27:17 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:44.194 08:27:17 -- nvmf/common.sh@116 -- # sync
00:26:44.194 08:27:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:44.194 08:27:17 -- nvmf/common.sh@119 -- # set +e
00:26:44.194 08:27:17 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:44.194 08:27:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:44.194 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:44.194 08:27:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:44.194 08:27:17 -- nvmf/common.sh@123 -- # set -e
00:26:44.194 08:27:17 -- nvmf/common.sh@124 -- # return 0
00:26:44.194 08:27:17 -- nvmf/common.sh@477 -- # '[' -n 2380843 ']'
00:26:44.194 08:27:17 -- nvmf/common.sh@478 -- # killprocess 2380843
00:26:44.194 08:27:17 -- common/autotest_common.sh@924 -- # '[' -z 2380843 ']'
00:26:44.194 08:27:17 -- common/autotest_common.sh@928 -- # kill -0 2380843
00:26:44.194 08:27:17 -- common/autotest_common.sh@929 -- # uname
00:26:44.194 08:27:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:26:44.194 08:27:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2380843
00:26:44.194 08:27:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:26:44.194 08:27:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:26:44.194 08:27:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2380843'
killing process with pid 2380843
00:26:44.194 08:27:17 -- common/autotest_common.sh@943 -- # kill 2380843
00:26:44.194 08:27:17 -- common/autotest_common.sh@948 -- # wait 2380843
00:26:46.100 08:27:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:26:46.100 08:27:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:46.100 08:27:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:46.100 08:27:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:46.100 08:27:19 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:46.100 08:27:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:46.100 08:27:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:46.100 08:27:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:48.007 08:27:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:26:48.007
00:26:48.007 real 1m33.511s
00:26:48.007 user 5m34.766s
00:26:48.007 sys 0m14.566s
00:26:48.007 08:27:21 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:26:48.007 08:27:21 -- common/autotest_common.sh@10 -- # set +x
00:26:48.007
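Teardown in the trace undoes setup in reverse order: the subsystem first, then the nested lvol and its lvstore, then the outer pair. A dry-run sketch of that sequence (the `rpc` function only echoes; UUIDs are copied from the log):

```shell
# Dry-run of the teardown order from host/perf.sh@104-108 in the trace.
rpc() { echo "rpc.py $*"; }

rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rpc bdev_lvol_delete 4619f400-dd1e-4f82-98b3-9931054cbe3a     # lbd_nest_0
rpc bdev_lvol_delete_lvstore -l lvs_n_0
rpc bdev_lvol_delete e59b476d-ee7c-4fde-b9bc-8f0aaa92c98e     # lbd_0
rpc bdev_lvol_delete_lvstore -l lvs_0
```

Deleting inner before outer matters here: lvs_n_0 lives on lbd_0, so the outer lvstore cannot be removed until the nested one is gone.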
************************************ 00:26:48.007 END TEST nvmf_perf 00:26:48.007 ************************************ 00:26:48.007 08:27:21 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:48.007 08:27:21 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:26:48.007 08:27:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:48.007 08:27:21 -- common/autotest_common.sh@10 -- # set +x 00:26:48.007 ************************************ 00:26:48.007 START TEST nvmf_fio_host 00:26:48.007 ************************************ 00:26:48.007 08:27:21 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:48.007 * Looking for test storage... 00:26:48.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.007 08:27:21 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.007 08:27:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.007 08:27:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.007 08:27:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.007 08:27:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.007 08:27:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.007 08:27:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.007 08:27:21 -- paths/export.sh@5 -- # export PATH 00:26:48.007 08:27:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.007 08:27:21 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.007 08:27:21 -- nvmf/common.sh@7 -- # uname -s 00:26:48.007 08:27:21 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:48.007 08:27:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.007 08:27:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.007 08:27:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.007 08:27:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.007 08:27:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.007 08:27:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.007 08:27:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.007 08:27:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.007 08:27:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.007 08:27:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:48.007 08:27:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:48.007 08:27:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.007 08:27:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.007 08:27:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.007 08:27:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.007 08:27:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.007 08:27:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.007 08:27:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.007 08:27:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.008 08:27:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.008 08:27:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.008 08:27:21 -- paths/export.sh@5 -- # export PATH 00:26:48.008 08:27:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.008 08:27:21 -- nvmf/common.sh@46 -- # : 0 00:26:48.008 08:27:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:48.008 08:27:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:48.008 08:27:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:48.008 08:27:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.008 08:27:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.008 08:27:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:48.008 08:27:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:48.008 08:27:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:48.008 08:27:21 -- host/fio.sh@12 -- # nvmftestinit 00:26:48.008 08:27:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:48.008 08:27:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.008 08:27:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:48.008 08:27:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:48.008 08:27:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:48.008 08:27:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.008 08:27:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.008 08:27:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.008 08:27:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:48.008 08:27:21 -- nvmf/common.sh@402 -- # 
gather_supported_nvmf_pci_devs 00:26:48.008 08:27:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:48.008 08:27:21 -- common/autotest_common.sh@10 -- # set +x 00:26:54.648 08:27:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:54.648 08:27:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:54.648 08:27:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:54.648 08:27:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:54.648 08:27:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:54.648 08:27:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:54.648 08:27:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:54.648 08:27:27 -- nvmf/common.sh@294 -- # net_devs=() 00:26:54.648 08:27:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:54.648 08:27:27 -- nvmf/common.sh@295 -- # e810=() 00:26:54.648 08:27:27 -- nvmf/common.sh@295 -- # local -ga e810 00:26:54.648 08:27:27 -- nvmf/common.sh@296 -- # x722=() 00:26:54.648 08:27:27 -- nvmf/common.sh@296 -- # local -ga x722 00:26:54.648 08:27:27 -- nvmf/common.sh@297 -- # mlx=() 00:26:54.648 08:27:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:54.648 08:27:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.648 08:27:27 -- 
nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.648 08:27:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:54.648 08:27:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:54.648 08:27:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:54.648 08:27:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:54.648 08:27:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:54.648 08:27:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:54.648 08:27:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.648 08:27:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:54.648 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:54.648 08:27:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.648 08:27:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.649 08:27:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:54.649 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:54.649 08:27:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:54.649 08:27:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@381 -- # for pci in 
"${pci_devs[@]}" 00:26:54.649 08:27:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.649 08:27:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.649 08:27:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.649 08:27:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:54.649 Found net devices under 0000:af:00.0: cvl_0_0 00:26:54.649 08:27:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.649 08:27:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:54.649 08:27:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.649 08:27:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.649 08:27:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.649 08:27:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:54.649 Found net devices under 0000:af:00.1: cvl_0_1 00:26:54.649 08:27:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.649 08:27:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:54.649 08:27:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:54.649 08:27:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:54.649 08:27:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.649 08:27:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.649 08:27:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.649 08:27:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:54.649 08:27:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.649 08:27:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.649 08:27:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:54.649 08:27:27 -- nvmf/common.sh@241 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.649 08:27:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.649 08:27:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:54.649 08:27:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:54.649 08:27:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.649 08:27:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.649 08:27:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.649 08:27:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.649 08:27:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:54.649 08:27:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.649 08:27:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.649 08:27:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.649 08:27:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:54.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:26:54.649 00:26:54.649 --- 10.0.0.2 ping statistics --- 00:26:54.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.649 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:54.649 08:27:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:26:54.649 00:26:54.649 --- 10.0.0.1 ping statistics --- 00:26:54.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.649 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:26:54.649 08:27:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.649 08:27:27 -- nvmf/common.sh@410 -- # return 0 00:26:54.649 08:27:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:54.649 08:27:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.649 08:27:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:54.649 08:27:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.649 08:27:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:54.649 08:27:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:54.649 08:27:27 -- host/fio.sh@14 -- # [[ y != y ]] 00:26:54.649 08:27:27 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:26:54.649 08:27:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:54.649 08:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:54.649 08:27:27 -- host/fio.sh@22 -- # nvmfpid=2399002 00:26:54.649 08:27:27 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:54.649 08:27:27 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:54.649 08:27:27 -- host/fio.sh@26 -- # waitforlisten 2399002 00:26:54.649 08:27:27 -- common/autotest_common.sh@817 -- # '[' -z 2399002 ']' 00:26:54.649 08:27:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.649 08:27:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:54.649 08:27:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:26:54.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.649 08:27:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:54.649 08:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:54.649 [2024-02-13 08:27:27.587837] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:54.649 [2024-02-13 08:27:27.587881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.649 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.649 [2024-02-13 08:27:27.650815] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.649 [2024-02-13 08:27:27.728328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:54.649 [2024-02-13 08:27:27.728432] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.649 [2024-02-13 08:27:27.728441] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.649 [2024-02-13 08:27:27.728447] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:54.649 [2024-02-13 08:27:27.728488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.649 [2024-02-13 08:27:27.728584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.649 [2024-02-13 08:27:27.728669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.649 [2024-02-13 08:27:27.728671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.910 08:27:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:54.910 08:27:28 -- common/autotest_common.sh@850 -- # return 0 00:26:54.910 08:27:28 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:54.910 08:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.910 08:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.910 [2024-02-13 08:27:28.400843] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.910 08:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.910 08:27:28 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:26:54.910 08:27:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:54.910 08:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.910 08:27:28 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:54.910 08:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.910 08:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.910 Malloc1 00:26:54.910 08:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.910 08:27:28 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:54.910 08:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.910 08:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.910 08:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.910 08:27:28 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:54.910 08:27:28 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.910 08:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.910 08:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.910 08:27:28 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.910 08:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.910 08:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.910 [2024-02-13 08:27:28.484289] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.910 08:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.910 08:27:28 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:54.910 08:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.910 08:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.910 08:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.910 08:27:28 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:54.910 08:27:28 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:54.910 08:27:28 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:54.910 08:27:28 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:26:54.910 08:27:28 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:54.910 08:27:28 -- common/autotest_common.sh@1316 -- # local sanitizers 00:26:54.910 08:27:28 -- common/autotest_common.sh@1317 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:54.910 08:27:28 -- common/autotest_common.sh@1318 -- # shift 00:26:54.910 08:27:28 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:26:54.910 08:27:28 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # grep libasan 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # asan_lib= 00:26:54.910 08:27:28 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:26:54.910 08:27:28 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:26:54.910 08:27:28 -- common/autotest_common.sh@1322 -- # asan_lib= 00:26:54.910 08:27:28 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:26:54.910 08:27:28 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:54.910 08:27:28 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:55.169 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:55.169 fio-3.35 00:26:55.169 Starting 1 thread 00:26:55.169 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.706 00:26:57.706 test: (groupid=0, jobs=1): err= 0: pid=2399375: Tue Feb 13 08:27:31 
2024 00:26:57.706 read: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(96.4MiB/2005msec) 00:26:57.706 slat (nsec): min=1525, max=224337, avg=1709.04, stdev=1969.26 00:26:57.706 clat (usec): min=2974, max=13997, avg=5879.87, stdev=1056.30 00:26:57.706 lat (usec): min=2975, max=13999, avg=5881.58, stdev=1056.36 00:26:57.706 clat percentiles (usec): 00:26:57.706 | 1.00th=[ 4080], 5.00th=[ 4752], 10.00th=[ 5014], 20.00th=[ 5276], 00:26:57.706 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5800], 00:26:57.706 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6783], 95.00th=[ 8094], 00:26:57.706 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12780], 99.95th=[13566], 00:26:57.706 | 99.99th=[13960] 00:26:57.706 bw ( KiB/s): min=47912, max=50688, per=99.98%, avg=49246.00, stdev=1181.07, samples=4 00:26:57.706 iops : min=11978, max=12672, avg=12311.50, stdev=295.27, samples=4 00:26:57.706 write: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(96.2MiB/2005msec); 0 zone resets 00:26:57.706 slat (nsec): min=1582, max=202950, avg=1802.90, stdev=1472.82 00:26:57.706 clat (usec): min=1851, max=9036, avg=4456.53, stdev=618.92 00:26:57.706 lat (usec): min=1853, max=9037, avg=4458.33, stdev=618.96 00:26:57.706 clat percentiles (usec): 00:26:57.706 | 1.00th=[ 2704], 5.00th=[ 3359], 10.00th=[ 3752], 20.00th=[ 4113], 00:26:57.706 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:26:57.706 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5342], 00:26:57.706 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7898], 00:26:57.706 | 99.99th=[ 8979] 00:26:57.707 bw ( KiB/s): min=48304, max=49920, per=100.00%, avg=49132.00, stdev=837.89, samples=4 00:26:57.707 iops : min=12076, max=12480, avg=12283.00, stdev=209.47, samples=4 00:26:57.707 lat (msec) : 2=0.01%, 4=8.36%, 10=91.06%, 20=0.57% 00:26:57.707 cpu : usr=64.42%, sys=27.99%, ctx=36, majf=0, minf=6 00:26:57.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:57.707 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:57.707 issued rwts: total=24690,24626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:57.707 00:26:57.707 Run status group 0 (all jobs): 00:26:57.707 READ: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=96.4MiB (101MB), run=2005-2005msec 00:26:57.707 WRITE: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=96.2MiB (101MB), run=2005-2005msec 00:26:57.707 08:27:31 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:57.707 08:27:31 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:57.707 08:27:31 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:26:57.707 08:27:31 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:57.707 08:27:31 -- common/autotest_common.sh@1316 -- # local sanitizers 00:26:57.707 08:27:31 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:57.707 08:27:31 -- common/autotest_common.sh@1318 -- # shift 00:26:57.707 08:27:31 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:26:57.707 08:27:31 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:26:57.707 08:27:31 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:57.707 08:27:31 -- common/autotest_common.sh@1322 -- # grep libasan 00:26:57.707 08:27:31 -- 
common/autotest_common.sh@1322 -- # awk '{print $3}' 00:26:57.707 08:27:31 -- common/autotest_common.sh@1322 -- # asan_lib= 00:26:57.707 08:27:31 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:26:57.707 08:27:31 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:26:57.707 08:27:31 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:57.707 08:27:31 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:26:57.707 08:27:31 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:26:57.707 08:27:31 -- common/autotest_common.sh@1322 -- # asan_lib= 00:26:57.707 08:27:31 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:26:57.707 08:27:31 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:57.707 08:27:31 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:57.967 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:57.967 fio-3.35 00:26:57.967 Starting 1 thread 00:26:57.967 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.504 00:27:00.504 test: (groupid=0, jobs=1): err= 0: pid=2399873: Tue Feb 13 08:27:33 2024 00:27:00.504 read: IOPS=10.3k, BW=161MiB/s (169MB/s)(323MiB/2005msec) 00:27:00.504 slat (nsec): min=2525, max=82120, avg=2899.21, stdev=1276.55 00:27:00.504 clat (usec): min=1743, max=26271, avg=7580.34, stdev=2353.06 00:27:00.504 lat (usec): min=1745, max=26274, avg=7583.24, stdev=2353.41 00:27:00.504 clat percentiles (usec): 00:27:00.504 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5669], 00:27:00.504 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 7832], 00:27:00.504 | 70.00th=[ 8455], 80.00th=[ 9110], 
90.00th=[10290], 95.00th=[11863], 00:27:00.504 | 99.00th=[15664], 99.50th=[16909], 99.90th=[18744], 99.95th=[19006], 00:27:00.504 | 99.99th=[19530] 00:27:00.504 bw ( KiB/s): min=76832, max=93216, per=49.53%, avg=81640.00, stdev=7746.48, samples=4 00:27:00.504 iops : min= 4802, max= 5826, avg=5102.50, stdev=484.16, samples=4 00:27:00.504 write: IOPS=6022, BW=94.1MiB/s (98.7MB/s)(167MiB/1773msec); 0 zone resets 00:27:00.504 slat (usec): min=29, max=285, avg=31.84, stdev= 5.59 00:27:00.504 clat (usec): min=1785, max=22756, avg=8471.99, stdev=1773.21 00:27:00.504 lat (usec): min=1814, max=22793, avg=8503.83, stdev=1774.98 00:27:00.504 clat percentiles (usec): 00:27:00.504 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7111], 00:27:00.504 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:27:00.504 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[11207], 00:27:00.504 | 99.00th=[15401], 99.50th=[16909], 99.90th=[19268], 99.95th=[19530], 00:27:00.504 | 99.99th=[22676] 00:27:00.504 bw ( KiB/s): min=80224, max=97280, per=88.36%, avg=85144.00, stdev=8115.79, samples=4 00:27:00.504 iops : min= 5014, max= 6080, avg=5321.50, stdev=507.24, samples=4 00:27:00.504 lat (msec) : 2=0.02%, 4=1.45%, 10=85.84%, 20=12.68%, 50=0.01% 00:27:00.504 cpu : usr=85.48%, sys=12.23%, ctx=17, majf=0, minf=3 00:27:00.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:00.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:00.504 issued rwts: total=20656,10678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.504 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:00.504 00:27:00.504 Run status group 0 (all jobs): 00:27:00.504 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=323MiB (338MB), run=2005-2005msec 00:27:00.504 WRITE: bw=94.1MiB/s (98.7MB/s), 94.1MiB/s-94.1MiB/s 
(98.7MB/s-98.7MB/s), io=167MiB (175MB), run=1773-1773msec 00:27:00.504 08:27:33 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.504 08:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.504 08:27:33 -- common/autotest_common.sh@10 -- # set +x 00:27:00.504 08:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.504 08:27:33 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:27:00.504 08:27:33 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:27:00.504 08:27:33 -- host/fio.sh@49 -- # get_nvme_bdfs 00:27:00.504 08:27:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:27:00.504 08:27:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:27:00.504 08:27:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:00.505 08:27:33 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:00.505 08:27:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:27:00.505 08:27:33 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:27:00.505 08:27:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:27:00.505 08:27:33 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:27:00.505 08:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.505 08:27:33 -- common/autotest_common.sh@10 -- # set +x 00:27:03.796 Nvme0n1 00:27:03.796 08:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.796 08:27:36 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:03.796 08:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.796 08:27:36 -- common/autotest_common.sh@10 -- # set +x 00:27:06.328 08:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.328 08:27:39 -- host/fio.sh@51 -- # ls_guid=f027ac68-0810-40f2-a0db-e4ffe1bd1d78 
00:27:06.328 08:27:39 -- host/fio.sh@52 -- # get_lvs_free_mb f027ac68-0810-40f2-a0db-e4ffe1bd1d78 00:27:06.328 08:27:39 -- common/autotest_common.sh@1341 -- # local lvs_uuid=f027ac68-0810-40f2-a0db-e4ffe1bd1d78 00:27:06.328 08:27:39 -- common/autotest_common.sh@1342 -- # local lvs_info 00:27:06.328 08:27:39 -- common/autotest_common.sh@1343 -- # local fc 00:27:06.328 08:27:39 -- common/autotest_common.sh@1344 -- # local cs 00:27:06.328 08:27:39 -- common/autotest_common.sh@1345 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:06.328 08:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.328 08:27:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.328 08:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.328 08:27:39 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:27:06.328 { 00:27:06.328 "uuid": "f027ac68-0810-40f2-a0db-e4ffe1bd1d78", 00:27:06.328 "name": "lvs_0", 00:27:06.328 "base_bdev": "Nvme0n1", 00:27:06.328 "total_data_clusters": 930, 00:27:06.328 "free_clusters": 930, 00:27:06.328 "block_size": 512, 00:27:06.328 "cluster_size": 1073741824 00:27:06.328 } 00:27:06.328 ]' 00:27:06.328 08:27:39 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="f027ac68-0810-40f2-a0db-e4ffe1bd1d78") .free_clusters' 00:27:06.328 08:27:39 -- common/autotest_common.sh@1346 -- # fc=930 00:27:06.328 08:27:39 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="f027ac68-0810-40f2-a0db-e4ffe1bd1d78") .cluster_size' 00:27:06.328 08:27:39 -- common/autotest_common.sh@1347 -- # cs=1073741824 00:27:06.328 08:27:39 -- common/autotest_common.sh@1350 -- # free_mb=952320 00:27:06.328 08:27:39 -- common/autotest_common.sh@1351 -- # echo 952320 00:27:06.328 952320 00:27:06.328 08:27:39 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:06.328 08:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.328 08:27:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.328 
e12ff666-2b8d-41c5-86ca-2067216b3aa1 00:27:06.328 08:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.328 08:27:39 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:06.328 08:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.329 08:27:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.329 08:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.329 08:27:39 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:06.329 08:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.329 08:27:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.329 08:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.329 08:27:39 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:06.329 08:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.329 08:27:39 -- common/autotest_common.sh@10 -- # set +x 00:27:06.329 08:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.329 08:27:39 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:06.329 08:27:39 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:06.329 08:27:39 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:27:06.329 08:27:39 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:06.329 08:27:39 -- common/autotest_common.sh@1316 -- # local sanitizers 00:27:06.329 08:27:39 -- common/autotest_common.sh@1317 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.329 08:27:39 -- common/autotest_common.sh@1318 -- # shift 00:27:06.329 08:27:39 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:27:06.329 08:27:39 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # grep libasan 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:06.329 08:27:39 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:06.329 08:27:39 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:06.329 08:27:39 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:06.329 08:27:39 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:06.329 08:27:39 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:06.329 08:27:39 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:06.329 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:06.329 fio-3.35 00:27:06.329 Starting 1 thread 00:27:06.589 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.125 00:27:09.125 test: (groupid=0, jobs=1): err= 0: pid=2401442: Tue Feb 13 08:27:42 
2024 00:27:09.125 read: IOPS=8439, BW=33.0MiB/s (34.6MB/s)(66.1MiB/2006msec) 00:27:09.125 slat (nsec): min=1537, max=89603, avg=1696.75, stdev=967.84 00:27:09.125 clat (usec): min=1032, max=171152, avg=8436.42, stdev=10161.45 00:27:09.125 lat (usec): min=1034, max=171171, avg=8438.11, stdev=10161.59 00:27:09.125 clat percentiles (msec): 00:27:09.125 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:27:09.125 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:27:09.125 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:27:09.125 | 99.00th=[ 12], 99.50th=[ 13], 99.90th=[ 171], 99.95th=[ 171], 00:27:09.125 | 99.99th=[ 171] 00:27:09.125 bw ( KiB/s): min=23656, max=37344, per=99.89%, avg=33718.00, stdev=6714.22, samples=4 00:27:09.125 iops : min= 5914, max= 9336, avg=8429.50, stdev=1678.55, samples=4 00:27:09.125 write: IOPS=8432, BW=32.9MiB/s (34.5MB/s)(66.1MiB/2006msec); 0 zone resets 00:27:09.125 slat (nsec): min=1596, max=84710, avg=1789.28, stdev=752.67 00:27:09.125 clat (usec): min=355, max=168836, avg=6609.61, stdev=9429.19 00:27:09.125 lat (usec): min=357, max=168862, avg=6611.40, stdev=9429.36 00:27:09.125 clat percentiles (msec): 00:27:09.126 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:27:09.126 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:27:09.126 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 8], 00:27:09.126 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 169], 99.95th=[ 169], 00:27:09.126 | 99.99th=[ 169] 00:27:09.126 bw ( KiB/s): min=24744, max=36872, per=99.96%, avg=33718.00, stdev=5983.79, samples=4 00:27:09.126 iops : min= 6186, max= 9218, avg=8429.50, stdev=1495.95, samples=4 00:27:09.126 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:09.126 lat (msec) : 2=0.04%, 4=0.64%, 10=97.49%, 20=1.43%, 250=0.38% 00:27:09.126 cpu : usr=63.79%, sys=30.42%, ctx=73, majf=0, minf=6 00:27:09.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:09.126 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.126 issued rwts: total=16929,16916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.126 00:27:09.126 Run status group 0 (all jobs): 00:27:09.126 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=66.1MiB (69.3MB), run=2006-2006msec 00:27:09.126 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=66.1MiB (69.3MB), run=2006-2006msec 00:27:09.126 08:27:42 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:09.126 08:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.126 08:27:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.126 08:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.126 08:27:42 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:09.126 08:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.126 08:27:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.694 08:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.694 08:27:43 -- host/fio.sh@62 -- # ls_nested_guid=1489854e-c133-4fe3-b62c-0e95594cdb1e 00:27:09.694 08:27:43 -- host/fio.sh@63 -- # get_lvs_free_mb 1489854e-c133-4fe3-b62c-0e95594cdb1e 00:27:09.694 08:27:43 -- common/autotest_common.sh@1341 -- # local lvs_uuid=1489854e-c133-4fe3-b62c-0e95594cdb1e 00:27:09.694 08:27:43 -- common/autotest_common.sh@1342 -- # local lvs_info 00:27:09.694 08:27:43 -- common/autotest_common.sh@1343 -- # local fc 00:27:09.694 08:27:43 -- common/autotest_common.sh@1344 -- # local cs 00:27:09.694 08:27:43 -- common/autotest_common.sh@1345 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:09.694 08:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.694 08:27:43 -- common/autotest_common.sh@10 -- # 
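Before launching fio, the `fio_plugin` wrapper traced earlier runs `ldd` on the SPDK plugin and greps for `libasan`/`libclang_rt.asan`, taking the third field (the resolved path) to build `LD_PRELOAD`. A minimal sketch of that parsing step, assuming typical `ldd` output (the sample line is illustrative, not from this run, where both lookups came back empty):

```python
def find_sanitizer_libs(ldd_output: str,
                        sanitizers=("libasan", "libclang_rt.asan")):
    """Pick the resolved path (awk '$3') for each sanitizer lib in ldd output."""
    libs = []
    for line in ldd_output.splitlines():
        fields = line.split()
        # ldd lines look like: "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"
        if len(fields) >= 3 and any(s in fields[0] for s in sanitizers):
            libs.append(fields[2])
    return libs

sample = ("libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f2a00000000)\n"
          "libc.so.6 => /lib64/libc.so.6 (0x00007f2b00000000)")
print(find_sanitizer_libs(sample))  # -> ['/usr/lib64/libasan.so.8']
```

Preloading the sanitizer runtime ahead of the fio ioengine keeps ASAN's interceptors active when fio `dlopen`s the SPDK plugin.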
set +x 00:27:09.694 08:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.694 08:27:43 -- common/autotest_common.sh@1345 -- # lvs_info='[ 00:27:09.694 { 00:27:09.694 "uuid": "f027ac68-0810-40f2-a0db-e4ffe1bd1d78", 00:27:09.694 "name": "lvs_0", 00:27:09.694 "base_bdev": "Nvme0n1", 00:27:09.694 "total_data_clusters": 930, 00:27:09.694 "free_clusters": 0, 00:27:09.694 "block_size": 512, 00:27:09.694 "cluster_size": 1073741824 00:27:09.694 }, 00:27:09.694 { 00:27:09.694 "uuid": "1489854e-c133-4fe3-b62c-0e95594cdb1e", 00:27:09.694 "name": "lvs_n_0", 00:27:09.694 "base_bdev": "e12ff666-2b8d-41c5-86ca-2067216b3aa1", 00:27:09.694 "total_data_clusters": 237847, 00:27:09.694 "free_clusters": 237847, 00:27:09.694 "block_size": 512, 00:27:09.694 "cluster_size": 4194304 00:27:09.694 } 00:27:09.694 ]' 00:27:09.694 08:27:43 -- common/autotest_common.sh@1346 -- # jq '.[] | select(.uuid=="1489854e-c133-4fe3-b62c-0e95594cdb1e") .free_clusters' 00:27:09.954 08:27:43 -- common/autotest_common.sh@1346 -- # fc=237847 00:27:09.954 08:27:43 -- common/autotest_common.sh@1347 -- # jq '.[] | select(.uuid=="1489854e-c133-4fe3-b62c-0e95594cdb1e") .cluster_size' 00:27:09.954 08:27:43 -- common/autotest_common.sh@1347 -- # cs=4194304 00:27:09.954 08:27:43 -- common/autotest_common.sh@1350 -- # free_mb=951388 00:27:09.954 08:27:43 -- common/autotest_common.sh@1351 -- # echo 951388 00:27:09.954 951388 00:27:09.954 08:27:43 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:09.954 08:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.954 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:27:10.213 05243bd3-16f1-47ca-b65d-36eacf6cc214 00:27:10.213 08:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.213 08:27:43 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:10.213 08:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.213 08:27:43 -- 
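The trace above selects one store out of the `bdev_lvol_get_lvstores` array with `jq '.[] | select(.uuid=="…") .free_clusters'`, then repeats the cluster-to-MiB conversion for the nested store. A sketch of the equivalent selection and arithmetic in Python (JSON values taken from the trace; the helper name is illustrative):

```python
import json

lvs_info = json.loads("""[
  {"uuid": "1489854e-c133-4fe3-b62c-0e95594cdb1e", "name": "lvs_n_0",
   "total_data_clusters": 237847, "free_clusters": 237847,
   "block_size": 512, "cluster_size": 4194304}
]""")

def select_field(stores, uuid, field):
    """jq-style: .[] | select(.uuid==uuid) .field"""
    return next(s[field] for s in stores if s["uuid"] == uuid)

uuid = "1489854e-c133-4fe3-b62c-0e95594cdb1e"
fc = select_field(lvs_info, uuid, "free_clusters")
cs = select_field(lvs_info, uuid, "cluster_size")
print(fc * cs // (1024 * 1024))  # -> 951388, the free_mb echoed above
```

Note the nested store uses 4 MiB clusters on top of the 1 GiB-clustered parent, so 237847 clusters yield 951388 MiB.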
common/autotest_common.sh@10 -- # set +x 00:27:10.213 08:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.213 08:27:43 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:10.213 08:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.213 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:27:10.213 08:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.213 08:27:43 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:10.213 08:27:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.213 08:27:43 -- common/autotest_common.sh@10 -- # set +x 00:27:10.213 08:27:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.213 08:27:43 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:10.213 08:27:43 -- common/autotest_common.sh@1337 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:10.213 08:27:43 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:27:10.213 08:27:43 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:10.213 08:27:43 -- common/autotest_common.sh@1316 -- # local sanitizers 00:27:10.213 08:27:43 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:10.213 08:27:43 -- common/autotest_common.sh@1318 -- # shift 00:27:10.213 08:27:43 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:27:10.213 08:27:43 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.213 08:27:43 -- 
common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:10.213 08:27:43 -- common/autotest_common.sh@1322 -- # grep libasan 00:27:10.213 08:27:43 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:10.213 08:27:43 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:10.213 08:27:43 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:10.213 08:27:43 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.213 08:27:43 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:10.213 08:27:43 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:27:10.213 08:27:43 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:27:10.213 08:27:43 -- common/autotest_common.sh@1322 -- # asan_lib= 00:27:10.213 08:27:43 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:27:10.213 08:27:43 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:10.214 08:27:43 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:10.473 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:10.473 fio-3.35 00:27:10.473 Starting 1 thread 00:27:10.732 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.269 00:27:13.269 test: (groupid=0, jobs=1): err= 0: pid=2402137: Tue Feb 13 08:27:46 2024 00:27:13.269 read: IOPS=8226, BW=32.1MiB/s (33.7MB/s)(64.5MiB/2007msec) 00:27:13.269 slat (nsec): min=1539, max=105223, avg=1718.53, stdev=1189.30 00:27:13.269 clat (usec): min=2701, max=14210, avg=8643.27, stdev=785.84 00:27:13.269 lat (usec): min=2705, max=14212, avg=8644.99, stdev=785.78 00:27:13.269 clat percentiles (usec): 
00:27:13.269 | 1.00th=[ 6783], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8029], 00:27:13.269 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:27:13.269 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 00:27:13.269 | 99.00th=[10552], 99.50th=[11731], 99.90th=[13304], 99.95th=[14091], 00:27:13.269 | 99.99th=[14222] 00:27:13.269 bw ( KiB/s): min=31984, max=33432, per=99.90%, avg=32874.00, stdev=639.35, samples=4 00:27:13.269 iops : min= 7996, max= 8358, avg=8218.50, stdev=159.84, samples=4 00:27:13.269 write: IOPS=8228, BW=32.1MiB/s (33.7MB/s)(64.5MiB/2007msec); 0 zone resets 00:27:13.269 slat (nsec): min=1590, max=93729, avg=1811.42, stdev=843.72 00:27:13.269 clat (usec): min=1584, max=13213, avg=6839.23, stdev=670.47 00:27:13.269 lat (usec): min=1591, max=13214, avg=6841.04, stdev=670.44 00:27:13.269 clat percentiles (usec): 00:27:13.269 | 1.00th=[ 5145], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6325], 00:27:13.269 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:27:13.269 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 7832], 00:27:13.269 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[11731], 99.95th=[11994], 00:27:13.269 | 99.99th=[12649] 00:27:13.269 bw ( KiB/s): min=32728, max=33168, per=100.00%, avg=32922.00, stdev=195.78, samples=4 00:27:13.269 iops : min= 8182, max= 8292, avg=8230.50, stdev=48.95, samples=4 00:27:13.269 lat (msec) : 2=0.01%, 4=0.08%, 10=98.40%, 20=1.52% 00:27:13.269 cpu : usr=62.66%, sys=32.45%, ctx=76, majf=0, minf=6 00:27:13.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:13.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.269 issued rwts: total=16511,16514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.269 00:27:13.269 Run status 
group 0 (all jobs): 00:27:13.269 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.5MiB (67.6MB), run=2007-2007msec 00:27:13.269 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.5MiB (67.6MB), run=2007-2007msec 00:27:13.269 08:27:46 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:13.269 08:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.269 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:27:13.269 08:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.269 08:27:46 -- host/fio.sh@72 -- # sync 00:27:13.269 08:27:46 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:13.269 08:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:13.269 08:27:46 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 08:27:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.589 08:27:50 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:27:16.589 08:27:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.589 08:27:50 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 08:27:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.589 08:27:50 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:27:16.589 08:27:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.589 08:27:50 -- common/autotest_common.sh@10 -- # set +x 00:27:19.126 08:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.126 08:27:52 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:27:19.126 08:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.126 08:27:52 -- common/autotest_common.sh@10 -- # set +x 00:27:19.126 08:27:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.126 08:27:52 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:27:19.126 08:27:52 -- common/autotest_common.sh@549 -- # xtrace_disable 
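The fio summaries above are internally consistent: reported bandwidth is just IOPS times the 4 KiB block size. A quick sanity-check sketch using the second run's read figures:

```python
def bw_mib_s(iops: float, block_size: int) -> float:
    """Convert an IOPS rate at a fixed block size into MiB/s."""
    return iops * block_size / (1024 * 1024)

# Second fio run: read IOPS=8226 at --bs=4096 reported as 32.1 MiB/s.
print(round(bw_mib_s(8226, 4096), 1))  # -> 32.1
```

The same identity holds for the first run (8439 IOPS × 4096 B ≈ 33.0 MiB/s), which is a useful cross-check when eyeballing fio logs.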
00:27:19.126 08:27:52 -- common/autotest_common.sh@10 -- # set +x 00:27:21.037 08:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:21.037 08:27:54 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:27:21.037 08:27:54 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:27:21.037 08:27:54 -- host/fio.sh@84 -- # nvmftestfini 00:27:21.037 08:27:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:21.037 08:27:54 -- nvmf/common.sh@116 -- # sync 00:27:21.037 08:27:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:21.037 08:27:54 -- nvmf/common.sh@119 -- # set +e 00:27:21.037 08:27:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:21.037 08:27:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:21.037 rmmod nvme_tcp 00:27:21.037 rmmod nvme_fabrics 00:27:21.037 rmmod nvme_keyring 00:27:21.037 08:27:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:21.037 08:27:54 -- nvmf/common.sh@123 -- # set -e 00:27:21.037 08:27:54 -- nvmf/common.sh@124 -- # return 0 00:27:21.037 08:27:54 -- nvmf/common.sh@477 -- # '[' -n 2399002 ']' 00:27:21.037 08:27:54 -- nvmf/common.sh@478 -- # killprocess 2399002 00:27:21.037 08:27:54 -- common/autotest_common.sh@924 -- # '[' -z 2399002 ']' 00:27:21.037 08:27:54 -- common/autotest_common.sh@928 -- # kill -0 2399002 00:27:21.037 08:27:54 -- common/autotest_common.sh@929 -- # uname 00:27:21.037 08:27:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:21.037 08:27:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2399002 00:27:21.037 08:27:54 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:21.037 08:27:54 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:21.037 08:27:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2399002' 00:27:21.037 killing process with pid 2399002 00:27:21.037 08:27:54 -- common/autotest_common.sh@943 -- # kill 2399002 00:27:21.037 08:27:54 -- common/autotest_common.sh@948 -- # wait 
2399002 00:27:21.037 08:27:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:21.037 08:27:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:21.037 08:27:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:21.037 08:27:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.037 08:27:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:21.037 08:27:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.037 08:27:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.037 08:27:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.578 08:27:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:23.578 00:27:23.578 real 0m35.343s 00:27:23.578 user 2m16.418s 00:27:23.578 sys 0m8.427s 00:27:23.578 08:27:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:23.578 08:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:23.578 ************************************ 00:27:23.578 END TEST nvmf_fio_host 00:27:23.578 ************************************ 00:27:23.578 08:27:56 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:23.578 08:27:56 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:27:23.578 08:27:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:23.578 08:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:23.578 ************************************ 00:27:23.578 START TEST nvmf_failover 00:27:23.578 ************************************ 00:27:23.578 08:27:56 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:23.578 * Looking for test storage... 
00:27:23.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:23.578 08:27:56 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.578 08:27:56 -- nvmf/common.sh@7 -- # uname -s 00:27:23.578 08:27:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.578 08:27:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.579 08:27:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.579 08:27:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.579 08:27:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.579 08:27:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.579 08:27:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.579 08:27:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.579 08:27:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.579 08:27:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.579 08:27:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:23.579 08:27:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:23.579 08:27:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.579 08:27:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.579 08:27:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.579 08:27:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.579 08:27:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.579 08:27:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.579 08:27:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.579 08:27:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.579 08:27:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.579 08:27:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.579 08:27:56 -- paths/export.sh@5 -- # export PATH 00:27:23.579 08:27:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.579 08:27:56 -- nvmf/common.sh@46 -- # : 0 00:27:23.579 08:27:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:23.579 08:27:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:23.579 08:27:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:23.579 08:27:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.579 08:27:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.579 08:27:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:23.579 08:27:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:23.579 08:27:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:23.579 08:27:56 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:23.579 08:27:56 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:23.579 08:27:56 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:23.579 08:27:56 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:23.579 08:27:56 -- host/failover.sh@18 -- # nvmftestinit 00:27:23.579 08:27:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:23.579 08:27:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.579 08:27:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:23.579 08:27:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:23.579 08:27:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:23.579 08:27:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
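The `PATH` echoed above has grown the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes many times over, because each `source paths/export.sh` prepends them again. A minimal sketch of order-preserving deduplication for such a variable (the function is illustrative, not part of the test scripts):

```python
def dedupe_path(path: str) -> str:
    """Collapse repeated PATH entries, keeping the first occurrence of each."""
    seen, out = set(), []
    for entry in path.split(":"):
        if entry and entry not in seen:
            seen.add(entry)
            out.append(entry)
    return ":".join(out)

print(dedupe_path("/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/bin"))
# -> /opt/go/bin:/usr/bin:/bin
```

Keeping first-seen order matters: the earliest entry wins lookup, so deduplication must not reorder the prefixes.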
_remove_spdk_ns 00:27:23.579 08:27:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.579 08:27:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.579 08:27:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:23.579 08:27:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:23.579 08:27:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:23.579 08:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:30.149 08:28:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:30.149 08:28:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:30.149 08:28:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:30.149 08:28:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:30.149 08:28:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:30.149 08:28:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:30.149 08:28:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:30.149 08:28:02 -- nvmf/common.sh@294 -- # net_devs=() 00:27:30.149 08:28:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:30.149 08:28:02 -- nvmf/common.sh@295 -- # e810=() 00:27:30.149 08:28:02 -- nvmf/common.sh@295 -- # local -ga e810 00:27:30.149 08:28:02 -- nvmf/common.sh@296 -- # x722=() 00:27:30.149 08:28:02 -- nvmf/common.sh@296 -- # local -ga x722 00:27:30.149 08:28:02 -- nvmf/common.sh@297 -- # mlx=() 00:27:30.149 08:28:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:30.149 08:28:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:27:30.149 08:28:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.149 08:28:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:30.149 08:28:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:30.149 08:28:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:30.149 08:28:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:30.149 08:28:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:30.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:30.149 08:28:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:30.149 08:28:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:30.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:30.149 08:28:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.149 08:28:02 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:30.149 08:28:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:30.149 08:28:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.149 08:28:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:30.149 08:28:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.149 08:28:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:30.149 Found net devices under 0000:af:00.0: cvl_0_0 00:27:30.149 08:28:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.149 08:28:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:30.149 08:28:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.149 08:28:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:30.149 08:28:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.149 08:28:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:30.149 Found net devices under 0000:af:00.1: cvl_0_1 00:27:30.149 08:28:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.149 08:28:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:30.149 08:28:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:30.149 08:28:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:30.149 08:28:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.149 08:28:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.149 08:28:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.149 08:28:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:27:30.149 08:28:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.149 08:28:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.149 08:28:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:30.149 08:28:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.149 08:28:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.149 08:28:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:30.149 08:28:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:30.149 08:28:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.149 08:28:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.149 08:28:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.149 08:28:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.149 08:28:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:30.149 08:28:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.149 08:28:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.149 08:28:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.149 08:28:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:30.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:27:30.149 00:27:30.149 --- 10.0.0.2 ping statistics --- 00:27:30.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.149 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:30.149 08:28:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:27:30.149 00:27:30.149 --- 10.0.0.1 ping statistics --- 00:27:30.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.149 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:27:30.149 08:28:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.149 08:28:02 -- nvmf/common.sh@410 -- # return 0 00:27:30.149 08:28:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:30.149 08:28:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.149 08:28:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:30.149 08:28:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:30.150 08:28:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.150 08:28:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:30.150 08:28:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:30.150 08:28:02 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:30.150 08:28:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:30.150 08:28:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:30.150 08:28:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.150 08:28:02 -- nvmf/common.sh@469 -- # nvmfpid=2407553 00:27:30.150 08:28:02 -- nvmf/common.sh@470 -- # waitforlisten 2407553 00:27:30.150 08:28:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:30.150 08:28:02 -- common/autotest_common.sh@817 -- # '[' -z 2407553 ']' 00:27:30.150 08:28:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.150 08:28:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:30.150 08:28:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:30.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.150 08:28:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:30.150 08:28:02 -- common/autotest_common.sh@10 -- # set +x 00:27:30.150 [2024-02-13 08:28:02.922737] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:30.150 [2024-02-13 08:28:02.922782] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.150 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.150 [2024-02-13 08:28:02.985851] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:30.150 [2024-02-13 08:28:03.060879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:30.150 [2024-02-13 08:28:03.060989] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.150 [2024-02-13 08:28:03.060999] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.150 [2024-02-13 08:28:03.061007] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
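The trace above (`nvmf/common.sh`'s `nvmf_tcp_init`) moves one port of the e810 NIC into a private network namespace so target and initiator can talk over real TCP on a single host, then sanity-checks both directions with ping. A dry-run sketch of that plumbing, using the interface and namespace names from the log (the `run`/`DRY_RUN` guard is our addition, not SPDK's, so the sketch can be printed without root):

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing traced above.
# Interface/namespace names are taken from the log; the DRY_RUN guard
# is ours so the commands can be printed instead of executed as root.
TGT_IF=cvl_0_0                 # target-side port, moved into the namespace
INI_IF=cvl_0_1                 # initiator-side port, stays in the root ns
NS=cvl_0_0_ns_spdk

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2         # initiator -> target sanity check
```

With both ends on the same 10.0.0.0/24 but in separate namespaces, the kernel routes the traffic through the NIC rather than loopback, which is what makes this a "phy" test.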
00:27:30.150 [2024-02-13 08:28:03.061106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.150 [2024-02-13 08:28:03.061192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.150 [2024-02-13 08:28:03.061193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.150 08:28:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:30.150 08:28:03 -- common/autotest_common.sh@850 -- # return 0 00:27:30.150 08:28:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:30.150 08:28:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:30.150 08:28:03 -- common/autotest_common.sh@10 -- # set +x 00:27:30.150 08:28:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.150 08:28:03 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:30.409 [2024-02-13 08:28:03.905256] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.409 08:28:03 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:30.669 Malloc0 00:27:30.669 08:28:04 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.669 08:28:04 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.929 08:28:04 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.215 [2024-02-13 08:28:04.678926] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.215 08:28:04 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:31.215 [2024-02-13 08:28:04.851426] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:31.215 08:28:04 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:31.475 [2024-02-13 08:28:05.019956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:31.475 08:28:05 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:31.475 08:28:05 -- host/failover.sh@31 -- # bdevperf_pid=2407931 00:27:31.475 08:28:05 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:31.475 08:28:05 -- host/failover.sh@34 -- # waitforlisten 2407931 /var/tmp/bdevperf.sock 00:27:31.475 08:28:05 -- common/autotest_common.sh@817 -- # '[' -z 2407931 ']' 00:27:31.475 08:28:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:31.475 08:28:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:31.475 08:28:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:31.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
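`host/failover.sh` then configures the target over JSON-RPC: create the TCP transport, a Malloc bdev, a subsystem with that namespace, and listeners on ports 4420 through 4422. A condensed sketch of that sequence (`RPC="echo rpc.py"` is our stand-in so the calls print instead of requiring a live `nvmf_tgt`; point it at `scripts/rpc.py` to drive a real target):

```shell
# Condensed sketch of the RPC setup sequence traced above.
# RPC=echo is a printable stand-in for scripts/rpc.py, so no running
# nvmf_tgt is needed to exercise the sketch.
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte I/O unit
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do                        # three listeners to fail over between
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done
```

The three listeners on one subsystem are the whole point: bdevperf can hold paths through two of them while the test tears the active one down.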
00:27:31.475 08:28:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:31.475 08:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:32.415 08:28:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:32.415 08:28:05 -- common/autotest_common.sh@850 -- # return 0 00:27:32.415 08:28:05 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:32.674 NVMe0n1 00:27:32.674 08:28:06 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:32.934 00:27:32.934 08:28:06 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:32.934 08:28:06 -- host/failover.sh@39 -- # run_test_pid=2408164 00:27:32.934 08:28:06 -- host/failover.sh@41 -- # sleep 1 00:27:33.873 08:28:07 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.133 [2024-02-13 08:28:07.629777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24857e0 is same with the state(5) to be set 00:27:34.134 08:28:07 -- host/failover.sh@45 -- # sleep 3 00:27:37.431 08:28:10 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0
-t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:37.431 00:27:37.431 08:28:10 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:37.431 [2024-02-13 08:28:11.106309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2485ff0 is same with the state(5) to be set 00:27:37.431 00:27:37.708 08:28:11 -- host/failover.sh@50 -- # sleep 3 00:27:41.014 08:28:14 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.014 [2024-02-13 08:28:14.308107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.014 08:28:14 -- host/failover.sh@55 -- # sleep 1 00:27:41.953 08:28:15 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:41.953 [2024-02-13 08:28:15.504584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2230460 is same with the state(5) to be set 00:27:41.954 [2024-02-13 08:28:15.504679]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2230460 is same with the state(5) to be set 00:27:41.954 08:28:15 -- host/failover.sh@59 -- # wait 2408164 00:27:48.532 0 00:27:48.532 08:28:21 -- host/failover.sh@61 -- # killprocess 2407931 00:27:48.532 08:28:21 -- common/autotest_common.sh@924 -- # '[' -z 2407931 ']' 00:27:48.532 08:28:21 -- common/autotest_common.sh@928 -- # kill -0 2407931 00:27:48.532 08:28:21 -- common/autotest_common.sh@929 -- # uname 00:27:48.532 08:28:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:48.532 08:28:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2407931 00:27:48.532 08:28:21
-- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:48.532 08:28:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:48.532 08:28:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2407931' 00:27:48.532 killing process with pid 2407931 00:27:48.532 08:28:21 -- common/autotest_common.sh@943 -- # kill 2407931 00:27:48.532 08:28:21 -- common/autotest_common.sh@948 -- # wait 2407931 00:27:48.532 08:28:21 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:48.532 [2024-02-13 08:28:05.076194] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:48.532 [2024-02-13 08:28:05.076247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407931 ] 00:27:48.533 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.533 [2024-02-13 08:28:05.137700] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.533 [2024-02-13 08:28:05.209052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.533 Running I/O for 15 seconds... 
00:27:48.533 [2024-02-13 08:28:07.630476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.533 [2024-02-13 08:28:07.630511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.533 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeated between 08:28:07.630528 and 08:28:07.631858 for READ and WRITE commands on sqid:1 with lba values from 17448 through 18528, each completed as ABORTED - SQ DELETION (00/08) ...]
00:27:48.535 [2024-02-13 08:28:07.631858] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.535 [2024-02-13 08:28:07.631873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.535 [2024-02-13 08:28:07.631886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.535 [2024-02-13 08:28:07.631901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.631916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.631930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.631946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.631961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.631976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.631991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.631999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 
[2024-02-13 08:28:07.632027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.535 [2024-02-13 08:28:07.632077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.535 [2024-02-13 08:28:07.632090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.535 [2024-02-13 08:28:07.632104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.535 [2024-02-13 08:28:07.632153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.535 [2024-02-13 08:28:07.632162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.536 [2024-02-13 08:28:07.632192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.536 [2024-02-13 08:28:07.632207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.536 [2024-02-13 08:28:07.632264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 
[2024-02-13 08:28:07.632272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.536 [2024-02-13 08:28:07.632278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:07.632381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632388] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8e7c0 is same with the state(5) to be set 00:27:48.536 [2024-02-13 08:28:07.632396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:48.536 [2024-02-13 08:28:07.632401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:48.536 [2024-02-13 08:28:07.632409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18200 len:8 PRP1 0x0 PRP2 0x0 00:27:48.536 [2024-02-13 08:28:07.632415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632457] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe8e7c0 was disconnected and freed. reset controller. 
00:27:48.536 [2024-02-13 08:28:07.632471] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:48.536 [2024-02-13 08:28:07.632492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.536 [2024-02-13 08:28:07.632499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.536 [2024-02-13 08:28:07.632513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.536 [2024-02-13 08:28:07.632526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.536 [2024-02-13 08:28:07.632539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:07.632545] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:48.536 [2024-02-13 08:28:07.634373] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.536 [2024-02-13 08:28:07.634397] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f6b0 (9): Bad file descriptor 00:27:48.536 [2024-02-13 08:28:07.660177] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:48.536 [2024-02-13 08:28:11.106871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.106907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.106921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.106932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.106941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.106948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.106956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.106962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.106970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116272 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.106977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.106985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.106991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.106999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.107006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.107014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.107020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.107028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.107035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.107043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.107049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 
08:28:11.107057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.107063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.536 [2024-02-13 08:28:11.107071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.536 [2024-02-13 08:28:11.107078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 
nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:48.537 [2024-02-13 08:28:11.107307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.537 [2024-02-13 08:28:11.107458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.537 [2024-02-13 08:28:11.107472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.537 [2024-02-13 08:28:11.107488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.537 [2024-02-13 08:28:11.107503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.537 [2024-02-13 08:28:11.107511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.107532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.107546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 
[2024-02-13 08:28:11.107554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.107575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.107772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:48.538 [2024-02-13 08:28:11.107810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.107948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.107962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.107992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.107999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.108006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.108014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.108020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.108028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.538 [2024-02-13 08:28:11.108034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.108043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.108049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 
[2024-02-13 08:28:11.108057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.538 [2024-02-13 08:28:11.108063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.538 [2024-02-13 08:28:11.108071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 
[2024-02-13 08:28:11.108302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:48.539 [2024-02-13 08:28:11.108551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.539 [2024-02-13 08:28:11.108604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.539 [2024-02-13 08:28:11.108618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.539 [2024-02-13 08:28:11.108626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.540 [2024-02-13 08:28:11.108632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.540 [2024-02-13 08:28:11.108663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.540 [2024-02-13 08:28:11.108762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108772] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90810 is same with the state(5) to be set 00:27:48.540 [2024-02-13 08:28:11.108780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:48.540 [2024-02-13 08:28:11.108785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:48.540 [2024-02-13 08:28:11.108792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116472 len:8 PRP1 0x0 PRP2 0x0 00:27:48.540 [2024-02-13 08:28:11.108798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108838] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe90810 was disconnected and freed. reset controller. 00:27:48.540 [2024-02-13 08:28:11.108847] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:48.540 [2024-02-13 08:28:11.108867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.540 [2024-02-13 08:28:11.108875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.540 [2024-02-13 08:28:11.108888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.540 [2024-02-13 08:28:11.108901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.540 [2024-02-13 08:28:11.108914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.540 [2024-02-13 08:28:11.108919] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:48.540 [2024-02-13 08:28:11.110796] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:48.540 [2024-02-13 08:28:11.110820] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f6b0 (9): Bad file descriptor
00:27:48.540 [2024-02-13 08:28:11.141927] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:48.540 [2024-02-13 08:28:15.505125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.540 [2024-02-13 08:28:15.505161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.540 [... repeated READ command / ABORTED - SQ DELETION (00/08) completion pairs for qid:1 elided ...]
DATA BLOCK TRANSPORT 0x0
00:27:48.540 [... repeated READ/WRITE command / ABORTED - SQ DELETION (00/08) completion pairs for qid:1 elided ...]
00:27:48.543 [2024-02-13 08:28:15.506743] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.543 [2024-02-13 08:28:15.506788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.543 [2024-02-13 08:28:15.506816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.543 [2024-02-13 08:28:15.506860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.543 [2024-02-13 08:28:15.506889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.543 [2024-02-13 08:28:15.506904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 
[2024-02-13 08:28:15.506911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.543 [2024-02-13 08:28:15.506932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.506991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.506998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.507004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.507012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.507018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.507026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.543 [2024-02-13 08:28:15.507033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.543 [2024-02-13 08:28:15.507040] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9dc80 is same with the state(5) to be set 00:27:48.543 [2024-02-13 08:28:15.507048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:48.543 [2024-02-13 08:28:15.507053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:48.543 [2024-02-13 08:28:15.507061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85416 len:8 PRP1 0x0 PRP2 0x0 00:27:48.544 [2024-02-13 08:28:15.507068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.544 [2024-02-13 08:28:15.507108] 
bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe9dc80 was disconnected and freed. reset controller. 00:27:48.544 [2024-02-13 08:28:15.507120] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:48.544 [2024-02-13 08:28:15.507141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.544 [2024-02-13 08:28:15.507150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.544 [2024-02-13 08:28:15.507157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.544 [2024-02-13 08:28:15.507164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.544 [2024-02-13 08:28:15.507171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.544 [2024-02-13 08:28:15.507177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.544 [2024-02-13 08:28:15.507184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.544 [2024-02-13 08:28:15.507190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.544 [2024-02-13 08:28:15.507196] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:48.544 [2024-02-13 08:28:15.507216] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6f6b0 (9): Bad file descriptor 00:27:48.544 [2024-02-13 08:28:15.509090] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:48.544 [2024-02-13 08:28:15.656736] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:48.544 00:27:48.544 Latency(us) 00:27:48.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.544 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:48.544 Verification LBA range: start 0x0 length 0x4000 00:27:48.544 NVMe0n1 : 15.00 17078.08 66.71 1002.67 0.00 7066.46 838.70 14293.09 00:27:48.544 =================================================================================================================== 00:27:48.544 Total : 17078.08 66.71 1002.67 0.00 7066.46 838.70 14293.09 00:27:48.544 Received shutdown signal, test time was about 15.000000 seconds 00:27:48.544 00:27:48.544 Latency(us) 00:27:48.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.544 =================================================================================================================== 00:27:48.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.544 08:28:21 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:48.544 08:28:21 -- host/failover.sh@65 -- # count=3 00:27:48.544 08:28:21 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:48.544 08:28:21 -- host/failover.sh@73 -- # bdevperf_pid=2410699 00:27:48.544 08:28:21 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:48.544 08:28:21 -- host/failover.sh@75 -- # waitforlisten 2410699 /var/tmp/bdevperf.sock 00:27:48.544 08:28:21 -- common/autotest_common.sh@817 
-- # '[' -z 2410699 ']' 00:27:48.544 08:28:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.544 08:28:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:48.544 08:28:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:48.544 08:28:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:48.544 08:28:21 -- common/autotest_common.sh@10 -- # set +x 00:27:49.113 08:28:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:49.113 08:28:22 -- common/autotest_common.sh@850 -- # return 0 00:27:49.113 08:28:22 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:49.372 [2024-02-13 08:28:22.873403] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:49.372 08:28:22 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:49.372 [2024-02-13 08:28:23.049915] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:49.632 08:28:23 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.891 NVMe0n1 00:27:49.891 08:28:23 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.891 00:27:50.150 08:28:23 -- host/failover.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:50.410 00:27:50.410 08:28:23 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:50.410 08:28:23 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:50.669 08:28:24 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:50.669 08:28:24 -- host/failover.sh@87 -- # sleep 3 00:27:53.960 08:28:27 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.960 08:28:27 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:53.960 08:28:27 -- host/failover.sh@90 -- # run_test_pid=2411630 00:27:53.960 08:28:27 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:53.960 08:28:27 -- host/failover.sh@92 -- # wait 2411630 00:27:55.339 0 00:27:55.339 08:28:28 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:55.339 [2024-02-13 08:28:21.890696] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:55.339 [2024-02-13 08:28:21.890747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410699 ] 00:27:55.339 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.339 [2024-02-13 08:28:21.951537] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.339 [2024-02-13 08:28:22.021282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.339 [2024-02-13 08:28:24.302716] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:55.339 [2024-02-13 08:28:24.302766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.339 [2024-02-13 08:28:24.302777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.339 [2024-02-13 08:28:24.302786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.339 [2024-02-13 08:28:24.302793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.339 [2024-02-13 08:28:24.302800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.340 [2024-02-13 08:28:24.302806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.340 [2024-02-13 08:28:24.302813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.340 [2024-02-13 08:28:24.302820] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.340 [2024-02-13 08:28:24.302827] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:55.340 [2024-02-13 08:28:24.302848] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:55.340 [2024-02-13 08:28:24.302861] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18726b0 (9): Bad file descriptor 00:27:55.340 [2024-02-13 08:28:24.394845] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:55.340 Running I/O for 1 seconds... 00:27:55.340 00:27:55.340 Latency(us) 00:27:55.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.340 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:55.340 Verification LBA range: start 0x0 length 0x4000 00:27:55.340 NVMe0n1 : 1.00 17109.81 66.84 0.00 0.00 7451.92 1170.29 16103.13 00:27:55.340 =================================================================================================================== 00:27:55.340 Total : 17109.81 66.84 0.00 0.00 7451.92 1170.29 16103.13 00:27:55.340 08:28:28 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:55.340 08:28:28 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:55.340 08:28:28 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.340 08:28:28 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:55.340 08:28:28 -- host/failover.sh@99 -- # grep -q NVMe0 00:27:55.599 08:28:29 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:55.858 08:28:29 -- host/failover.sh@101 -- # sleep 3 00:27:59.150 08:28:32 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:59.150 08:28:32 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:59.150 08:28:32 -- host/failover.sh@108 -- # killprocess 2410699 00:27:59.151 08:28:32 -- common/autotest_common.sh@924 -- # '[' -z 2410699 ']' 00:27:59.151 08:28:32 -- common/autotest_common.sh@928 -- # kill -0 2410699 00:27:59.151 08:28:32 -- common/autotest_common.sh@929 -- # uname 00:27:59.151 08:28:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:59.151 08:28:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2410699 00:27:59.151 08:28:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:59.151 08:28:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:59.151 08:28:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2410699' 00:27:59.151 killing process with pid 2410699 00:27:59.151 08:28:32 -- common/autotest_common.sh@943 -- # kill 2410699 00:27:59.151 08:28:32 -- common/autotest_common.sh@948 -- # wait 2410699 00:27:59.151 08:28:32 -- host/failover.sh@110 -- # sync 00:27:59.151 08:28:32 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:59.410 08:28:32 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:59.410 08:28:32 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:59.410 08:28:32 -- host/failover.sh@116 -- # nvmftestfini 00:27:59.410 08:28:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:59.410 08:28:32 -- 
nvmf/common.sh@116 -- # sync 00:27:59.410 08:28:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:59.410 08:28:32 -- nvmf/common.sh@119 -- # set +e 00:27:59.410 08:28:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:59.410 08:28:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:59.410 rmmod nvme_tcp 00:27:59.410 rmmod nvme_fabrics 00:27:59.410 rmmod nvme_keyring 00:27:59.410 08:28:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:59.410 08:28:33 -- nvmf/common.sh@123 -- # set -e 00:27:59.410 08:28:33 -- nvmf/common.sh@124 -- # return 0 00:27:59.410 08:28:33 -- nvmf/common.sh@477 -- # '[' -n 2407553 ']' 00:27:59.410 08:28:33 -- nvmf/common.sh@478 -- # killprocess 2407553 00:27:59.410 08:28:33 -- common/autotest_common.sh@924 -- # '[' -z 2407553 ']' 00:27:59.410 08:28:33 -- common/autotest_common.sh@928 -- # kill -0 2407553 00:27:59.410 08:28:33 -- common/autotest_common.sh@929 -- # uname 00:27:59.410 08:28:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:59.410 08:28:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2407553 00:27:59.410 08:28:33 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:27:59.410 08:28:33 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:27:59.410 08:28:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2407553' 00:27:59.410 killing process with pid 2407553 00:27:59.410 08:28:33 -- common/autotest_common.sh@943 -- # kill 2407553 00:27:59.410 08:28:33 -- common/autotest_common.sh@948 -- # wait 2407553 00:27:59.670 08:28:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:59.670 08:28:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:59.670 08:28:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:59.670 08:28:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.670 08:28:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:59.670 08:28:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:59.670 08:28:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.670 08:28:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.287 08:28:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:02.287 00:28:02.287 real 0m38.527s 00:28:02.287 user 2m2.298s 00:28:02.287 sys 0m7.988s 00:28:02.287 08:28:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:02.287 08:28:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.287 ************************************ 00:28:02.287 END TEST nvmf_failover 00:28:02.287 ************************************ 00:28:02.287 08:28:35 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:02.287 08:28:35 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:02.287 08:28:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:02.287 08:28:35 -- common/autotest_common.sh@10 -- # set +x 00:28:02.287 ************************************ 00:28:02.287 START TEST nvmf_discovery 00:28:02.287 ************************************ 00:28:02.287 08:28:35 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:02.287 * Looking for test storage... 
00:28:02.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.287 08:28:35 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.287 08:28:35 -- nvmf/common.sh@7 -- # uname -s 00:28:02.287 08:28:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.287 08:28:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.287 08:28:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.287 08:28:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.287 08:28:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.287 08:28:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.287 08:28:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.287 08:28:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.287 08:28:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.287 08:28:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.287 08:28:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:02.287 08:28:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:02.287 08:28:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.287 08:28:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.287 08:28:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.287 08:28:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.287 08:28:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.287 08:28:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.287 08:28:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.287 08:28:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.287 08:28:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.287 08:28:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.287 08:28:35 -- paths/export.sh@5 -- # export PATH 00:28:02.287 08:28:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.287 08:28:35 -- nvmf/common.sh@46 -- # : 0 00:28:02.287 08:28:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:02.287 08:28:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:02.287 08:28:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:02.287 08:28:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.287 08:28:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.287 08:28:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:02.287 08:28:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:02.287 08:28:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:02.287 08:28:35 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:02.287 08:28:35 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:02.287 08:28:35 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:02.287 08:28:35 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:02.287 08:28:35 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:02.287 08:28:35 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:02.287 08:28:35 -- host/discovery.sh@25 -- # nvmftestinit 00:28:02.287 08:28:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:02.287 08:28:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.287 08:28:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:02.287 08:28:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:02.287 
08:28:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:02.287 08:28:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.287 08:28:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.287 08:28:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.287 08:28:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:02.287 08:28:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:02.287 08:28:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:02.287 08:28:35 -- common/autotest_common.sh@10 -- # set +x 00:28:07.560 08:28:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:07.560 08:28:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:07.560 08:28:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:07.560 08:28:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:07.560 08:28:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:07.560 08:28:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:07.560 08:28:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:07.561 08:28:40 -- nvmf/common.sh@294 -- # net_devs=() 00:28:07.561 08:28:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:07.561 08:28:40 -- nvmf/common.sh@295 -- # e810=() 00:28:07.561 08:28:40 -- nvmf/common.sh@295 -- # local -ga e810 00:28:07.561 08:28:40 -- nvmf/common.sh@296 -- # x722=() 00:28:07.561 08:28:40 -- nvmf/common.sh@296 -- # local -ga x722 00:28:07.561 08:28:40 -- nvmf/common.sh@297 -- # mlx=() 00:28:07.561 08:28:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:07.561 08:28:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.561 08:28:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:07.561 08:28:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:07.561 08:28:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:07.561 08:28:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:07.561 08:28:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:07.561 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:07.561 08:28:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:07.561 08:28:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:07.561 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:07.561 08:28:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:07.561 08:28:40 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:07.561 08:28:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:07.561 08:28:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.561 08:28:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:07.561 08:28:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.561 08:28:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:07.561 Found net devices under 0000:af:00.0: cvl_0_0 00:28:07.561 08:28:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.561 08:28:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:07.561 08:28:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.561 08:28:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:07.561 08:28:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.561 08:28:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:07.561 Found net devices under 0000:af:00.1: cvl_0_1 00:28:07.561 08:28:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.561 08:28:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:07.561 08:28:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:07.561 08:28:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:07.561 08:28:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:07.561 08:28:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.561 08:28:40 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.561 08:28:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.561 08:28:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:07.561 08:28:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.561 08:28:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.561 08:28:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:07.561 08:28:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.561 08:28:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.561 08:28:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:07.561 08:28:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:07.561 08:28:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.561 08:28:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.561 08:28:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.561 08:28:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.561 08:28:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:07.561 08:28:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.561 08:28:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.561 08:28:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.561 08:28:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:07.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:07.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:28:07.561 00:28:07.561 --- 10.0.0.2 ping statistics --- 00:28:07.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.561 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:28:07.561 08:28:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:28:07.561 00:28:07.561 --- 10.0.0.1 ping statistics --- 00:28:07.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.561 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:07.561 08:28:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.561 08:28:41 -- nvmf/common.sh@410 -- # return 0 00:28:07.561 08:28:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:07.561 08:28:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.561 08:28:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:07.561 08:28:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:07.561 08:28:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.561 08:28:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:07.561 08:28:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:07.561 08:28:41 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:07.561 08:28:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:07.561 08:28:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:07.561 08:28:41 -- common/autotest_common.sh@10 -- # set +x 00:28:07.561 08:28:41 -- nvmf/common.sh@469 -- # nvmfpid=2416263 00:28:07.561 08:28:41 -- nvmf/common.sh@470 -- # waitforlisten 2416263 00:28:07.561 08:28:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:07.561 08:28:41 -- common/autotest_common.sh@817 
-- # '[' -z 2416263 ']' 00:28:07.561 08:28:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.561 08:28:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:07.561 08:28:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.561 08:28:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:07.561 08:28:41 -- common/autotest_common.sh@10 -- # set +x 00:28:07.561 [2024-02-13 08:28:41.191194] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:07.561 [2024-02-13 08:28:41.191238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.561 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.821 [2024-02-13 08:28:41.254070] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.821 [2024-02-13 08:28:41.329599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:07.821 [2024-02-13 08:28:41.329706] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.821 [2024-02-13 08:28:41.329714] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.821 [2024-02-13 08:28:41.329721] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
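For readers reproducing this run locally: the `nvmf_tcp_init` steps traced above (creating the `cvl_0_0_ns_spdk` namespace, moving the target port into it, the 10.0.0.x/24 addressing, bringing the links up, and the iptables accept rule for port 4420) boil down to the short sequence below. This is a sketch that prints the commands rather than executing them, since the real run needs root and the `cvl_0_0`/`cvl_0_1` devices specific to this machine; the `netns_setup_cmds` helper name is ours, not SPDK's.

```shell
#!/bin/sh
# Print (rather than run) the namespace topology that nvmf_tcp_init builds.
# Device names are assumptions taken from this run; adapt to your NICs.
netns_setup_cmds() {
    target_if=$1
    initiator_if=$2
    ns="${target_if}_ns_spdk"
    echo "ip netns add $ns"
    echo "ip link set $target_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $initiator_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    echo "ip link set $initiator_if up"
    echo "ip netns exec $ns ip link set $target_if up"
    echo "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}

netns_setup_cmds cvl_0_0 cvl_0_1
```

Putting the target side of the link into its own namespace is what lets a single host exercise a real TCP path: the initiator pings 10.0.0.2 from the default namespace, and the target app is launched under `ip netns exec cvl_0_0_ns_spdk`, exactly as the `nvmfpid` startup line above shows.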
00:28:07.821 [2024-02-13 08:28:41.329741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.388 08:28:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:08.389 08:28:41 -- common/autotest_common.sh@850 -- # return 0 00:28:08.389 08:28:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:08.389 08:28:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:08.389 08:28:41 -- common/autotest_common.sh@10 -- # set +x 00:28:08.389 08:28:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.389 08:28:42 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.389 08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.389 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:08.389 [2024-02-13 08:28:42.019572] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.389 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.389 08:28:42 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:08.389 08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.389 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:08.389 [2024-02-13 08:28:42.031713] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:08.389 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.389 08:28:42 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:08.389 08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.389 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:08.389 null0 00:28:08.389 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.389 08:28:42 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:08.389 08:28:42 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:28:08.389 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:08.389 null1 00:28:08.389 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.389 08:28:42 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:08.389 08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.389 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:08.389 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.389 08:28:42 -- host/discovery.sh@45 -- # hostpid=2416372 00:28:08.389 08:28:42 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:08.389 08:28:42 -- host/discovery.sh@46 -- # waitforlisten 2416372 /tmp/host.sock 00:28:08.389 08:28:42 -- common/autotest_common.sh@817 -- # '[' -z 2416372 ']' 00:28:08.389 08:28:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:28:08.389 08:28:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:08.389 08:28:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:08.389 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:08.389 08:28:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:08.389 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:08.648 [2024-02-13 08:28:42.103883] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:08.649 [2024-02-13 08:28:42.103923] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416372 ] 00:28:08.649 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.649 [2024-02-13 08:28:42.163404] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.649 [2024-02-13 08:28:42.233032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:08.649 [2024-02-13 08:28:42.233149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.216 08:28:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:09.216 08:28:42 -- common/autotest_common.sh@850 -- # return 0 00:28:09.216 08:28:42 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.216 08:28:42 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:09.216 08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.216 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:42 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:09.475 08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:42 -- host/discovery.sh@72 -- # notify_id=0 00:28:09.475 08:28:42 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:09.475 08:28:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:09.475 08:28:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:09.475 
08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:42 -- host/discovery.sh@59 -- # sort 00:28:09.475 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:42 -- host/discovery.sh@59 -- # xargs 00:28:09.475 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:42 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:09.475 08:28:42 -- host/discovery.sh@79 -- # get_bdev_list 00:28:09.475 08:28:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.475 08:28:42 -- host/discovery.sh@55 -- # xargs 00:28:09.475 08:28:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.475 08:28:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:42 -- host/discovery.sh@55 -- # sort 00:28:09.475 08:28:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:43 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:09.475 08:28:43 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:09.475 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:43 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:09.475 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # sort 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # xargs 00:28:09.475 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:43 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:09.475 08:28:43 -- 
host/discovery.sh@83 -- # get_bdev_list 00:28:09.475 08:28:43 -- host/discovery.sh@55 -- # sort 00:28:09.475 08:28:43 -- host/discovery.sh@55 -- # xargs 00:28:09.475 08:28:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.475 08:28:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.475 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:43 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:09.475 08:28:43 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:09.475 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.475 08:28:43 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:09.475 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # sort 00:28:09.475 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.475 08:28:43 -- host/discovery.sh@59 -- # xargs 00:28:09.475 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:09.735 08:28:43 -- host/discovery.sh@87 -- # get_bdev_list 00:28:09.735 08:28:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.735 08:28:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.735 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.735 08:28:43 -- host/discovery.sh@55 -- # sort 00:28:09.735 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.735 08:28:43 
-- host/discovery.sh@55 -- # xargs 00:28:09.735 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:09.735 08:28:43 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:09.735 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.735 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.735 [2024-02-13 08:28:43.234895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.735 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:09.735 08:28:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:09.735 08:28:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:09.735 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.735 08:28:43 -- host/discovery.sh@59 -- # sort 00:28:09.735 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.735 08:28:43 -- host/discovery.sh@59 -- # xargs 00:28:09.735 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:09.735 08:28:43 -- host/discovery.sh@93 -- # get_bdev_list 00:28:09.735 08:28:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.735 08:28:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:09.735 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.735 08:28:43 -- host/discovery.sh@55 -- # sort 00:28:09.735 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.735 08:28:43 -- host/discovery.sh@55 -- # xargs 00:28:09.735 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:09.735 08:28:43 -- host/discovery.sh@94 -- # get_notification_count 
00:28:09.735 08:28:43 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:09.735 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.735 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.735 08:28:43 -- host/discovery.sh@74 -- # jq '. | length' 00:28:09.735 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@74 -- # notification_count=0 00:28:09.735 08:28:43 -- host/discovery.sh@75 -- # notify_id=0 00:28:09.735 08:28:43 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:09.735 08:28:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.735 08:28:43 -- common/autotest_common.sh@10 -- # set +x 00:28:09.735 08:28:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.735 08:28:43 -- host/discovery.sh@100 -- # sleep 1 00:28:10.304 [2024-02-13 08:28:43.958316] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:10.304 [2024-02-13 08:28:43.958335] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:10.304 [2024-02-13 08:28:43.958348] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:10.564 [2024-02-13 08:28:44.086830] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:10.564 [2024-02-13 08:28:44.189848] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:10.564 [2024-02-13 08:28:44.189867] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:10.824 08:28:44 -- host/discovery.sh@101 -- # get_subsystem_names 
00:28:10.824 08:28:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:10.824 08:28:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:10.824 08:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.824 08:28:44 -- host/discovery.sh@59 -- # sort 00:28:10.824 08:28:44 -- common/autotest_common.sh@10 -- # set +x 00:28:10.824 08:28:44 -- host/discovery.sh@59 -- # xargs 00:28:10.824 08:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.824 08:28:44 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.824 08:28:44 -- host/discovery.sh@102 -- # get_bdev_list 00:28:10.824 08:28:44 -- host/discovery.sh@55 -- # sort 00:28:10.824 08:28:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.824 08:28:44 -- host/discovery.sh@55 -- # xargs 00:28:10.824 08:28:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:10.824 08:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.824 08:28:44 -- common/autotest_common.sh@10 -- # set +x 00:28:10.824 08:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.824 08:28:44 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:10.824 08:28:44 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:10.824 08:28:44 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:10.824 08:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.824 08:28:44 -- common/autotest_common.sh@10 -- # set +x 00:28:10.824 08:28:44 -- host/discovery.sh@63 -- # xargs 00:28:10.824 08:28:44 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:10.824 08:28:44 -- host/discovery.sh@63 -- # sort -n 00:28:10.824 08:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.084 08:28:44 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:11.084 08:28:44 -- host/discovery.sh@104 -- # get_notification_count 00:28:11.084 08:28:44 -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:11.084 08:28:44 -- host/discovery.sh@74 -- # jq '. | length' 00:28:11.084 08:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.084 08:28:44 -- common/autotest_common.sh@10 -- # set +x 00:28:11.084 08:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.084 08:28:44 -- host/discovery.sh@74 -- # notification_count=1 00:28:11.084 08:28:44 -- host/discovery.sh@75 -- # notify_id=1 00:28:11.084 08:28:44 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:11.084 08:28:44 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:11.084 08:28:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.084 08:28:44 -- common/autotest_common.sh@10 -- # set +x 00:28:11.084 08:28:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.084 08:28:44 -- host/discovery.sh@109 -- # sleep 1 00:28:12.022 08:28:45 -- host/discovery.sh@110 -- # get_bdev_list 00:28:12.022 08:28:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.023 08:28:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.023 08:28:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.023 08:28:45 -- host/discovery.sh@55 -- # sort 00:28:12.023 08:28:45 -- common/autotest_common.sh@10 -- # set +x 00:28:12.023 08:28:45 -- host/discovery.sh@55 -- # xargs 00:28:12.023 08:28:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.023 08:28:45 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:12.023 08:28:45 -- host/discovery.sh@111 -- # get_notification_count 00:28:12.023 08:28:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:12.023 08:28:45 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:12.023 08:28:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.023 08:28:45 -- common/autotest_common.sh@10 -- # set +x 00:28:12.023 08:28:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.023 08:28:45 -- host/discovery.sh@74 -- # notification_count=1 00:28:12.023 08:28:45 -- host/discovery.sh@75 -- # notify_id=2 00:28:12.023 08:28:45 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:12.023 08:28:45 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:12.023 08:28:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.023 08:28:45 -- common/autotest_common.sh@10 -- # set +x 00:28:12.023 [2024-02-13 08:28:45.701640] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:12.023 [2024-02-13 08:28:45.702844] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:12.023 [2024-02-13 08:28:45.702869] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:12.023 08:28:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.023 08:28:45 -- host/discovery.sh@117 -- # sleep 1 00:28:12.282 [2024-02-13 08:28:45.833244] bdev_nvme.c:6628:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:12.282 [2024-02-13 08:28:45.894001] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:12.282 [2024-02-13 08:28:45.894017] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:12.282 [2024-02-13 08:28:45.894022] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:13.220 08:28:46 -- host/discovery.sh@118 -- # get_subsystem_names 
00:28:13.220 08:28:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:13.220 08:28:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:13.220 08:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.220 08:28:46 -- host/discovery.sh@59 -- # sort 00:28:13.220 08:28:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.220 08:28:46 -- host/discovery.sh@59 -- # xargs 00:28:13.220 08:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.220 08:28:46 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.220 08:28:46 -- host/discovery.sh@119 -- # get_bdev_list 00:28:13.220 08:28:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.220 08:28:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:13.220 08:28:46 -- host/discovery.sh@55 -- # sort 00:28:13.220 08:28:46 -- host/discovery.sh@55 -- # xargs 00:28:13.220 08:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.220 08:28:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.220 08:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.220 08:28:46 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:13.220 08:28:46 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:13.220 08:28:46 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:13.220 08:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.220 08:28:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.220 08:28:46 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:13.220 08:28:46 -- host/discovery.sh@63 -- # sort -n 00:28:13.220 08:28:46 -- host/discovery.sh@63 -- # xargs 00:28:13.220 08:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.220 08:28:46 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:13.220 08:28:46 -- host/discovery.sh@121 -- # get_notification_count
00:28:13.221 08:28:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:13.221 08:28:46 -- host/discovery.sh@74 -- # jq '. | length' 00:28:13.221 08:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.221 08:28:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.221 08:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.481 08:28:46 -- host/discovery.sh@74 -- # notification_count=0 00:28:13.481 08:28:46 -- host/discovery.sh@75 -- # notify_id=2 00:28:13.481 08:28:46 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:13.481 08:28:46 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.481 08:28:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.481 08:28:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.481 [2024-02-13 08:28:46.913797] bdev_nvme.c:6686:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:13.481 [2024-02-13 08:28:46.913817] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:13.481 [2024-02-13 08:28:46.914424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.481 [2024-02-13 08:28:46.914440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.481 [2024-02-13 08:28:46.914448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.481 [2024-02-13 08:28:46.914461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.481 [2024-02-13 08:28:46.914468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2
nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.481 [2024-02-13 08:28:46.914474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.481 [2024-02-13 08:28:46.914481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.481 [2024-02-13 08:28:46.914487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.481 [2024-02-13 08:28:46.914494] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.481 08:28:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.481 08:28:46 -- host/discovery.sh@127 -- # sleep 1 00:28:13.481 [2024-02-13 08:28:46.924434] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.481 [2024-02-13 08:28:46.934473] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.481 [2024-02-13 08:28:46.934844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.935086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.935098] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f6b50 with addr=10.0.0.2, port=4420 00:28:13.481 [2024-02-13 08:28:46.935106] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.481 [2024-02-13 08:28:46.935118] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.481 [2024-02-13 08:28:46.935135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:13.481 [2024-02-13 08:28:46.935142] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:13.481 [2024-02-13 08:28:46.935150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:13.481 [2024-02-13 08:28:46.935161] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.481 [2024-02-13 08:28:46.944526] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.481 [2024-02-13 08:28:46.944872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.945098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.945109] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f6b50 with addr=10.0.0.2, port=4420 00:28:13.481 [2024-02-13 08:28:46.945116] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.481 [2024-02-13 08:28:46.945126] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.481 [2024-02-13 08:28:46.945142] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:13.481 [2024-02-13 08:28:46.945148] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:13.481 [2024-02-13 08:28:46.945155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:13.481 [2024-02-13 08:28:46.945165] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.481 [2024-02-13 08:28:46.954575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.481 [2024-02-13 08:28:46.954815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.955095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.955107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f6b50 with addr=10.0.0.2, port=4420 00:28:13.481 [2024-02-13 08:28:46.955114] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.481 [2024-02-13 08:28:46.955125] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.481 [2024-02-13 08:28:46.955134] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:13.481 [2024-02-13 08:28:46.955140] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:13.481 [2024-02-13 08:28:46.955146] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:13.481 [2024-02-13 08:28:46.955156] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.481 [2024-02-13 08:28:46.964625] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.481 [2024-02-13 08:28:46.964904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.965206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.965217] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f6b50 with addr=10.0.0.2, port=4420 00:28:13.481 [2024-02-13 08:28:46.965224] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.481 [2024-02-13 08:28:46.965235] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.481 [2024-02-13 08:28:46.965257] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:13.481 [2024-02-13 08:28:46.965264] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:13.481 [2024-02-13 08:28:46.965271] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:13.481 [2024-02-13 08:28:46.965281] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.481 [2024-02-13 08:28:46.974677] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.481 [2024-02-13 08:28:46.974988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.975218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.481 [2024-02-13 08:28:46.975229] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f6b50 with addr=10.0.0.2, port=4420 00:28:13.481 [2024-02-13 08:28:46.975236] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.481 [2024-02-13 08:28:46.975247] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.481 [2024-02-13 08:28:46.975256] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:13.481 [2024-02-13 08:28:46.975262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:13.482 [2024-02-13 08:28:46.975269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:13.482 [2024-02-13 08:28:46.975278] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.482 [2024-02-13 08:28:46.984725] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.482 [2024-02-13 08:28:46.985023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.482 [2024-02-13 08:28:46.985315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.482 [2024-02-13 08:28:46.985330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f6b50 with addr=10.0.0.2, port=4420 00:28:13.482 [2024-02-13 08:28:46.985337] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.482 [2024-02-13 08:28:46.985348] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.482 [2024-02-13 08:28:46.985370] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:13.482 [2024-02-13 08:28:46.985378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:13.482 [2024-02-13 08:28:46.985386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:13.482 [2024-02-13 08:28:46.985396] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.482 [2024-02-13 08:28:46.994774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.482 [2024-02-13 08:28:46.995031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.482 [2024-02-13 08:28:46.995348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.482 [2024-02-13 08:28:46.995360] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f6b50 with addr=10.0.0.2, port=4420 00:28:13.482 [2024-02-13 08:28:46.995366] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f6b50 is same with the state(5) to be set 00:28:13.482 [2024-02-13 08:28:46.995377] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f6b50 (9): Bad file descriptor 00:28:13.482 [2024-02-13 08:28:46.995386] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:13.482 [2024-02-13 08:28:46.995392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:13.482 [2024-02-13 08:28:46.995398] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:13.482 [2024-02-13 08:28:46.995414] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:13.482 [2024-02-13 08:28:47.000418] bdev_nvme.c:6491:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:13.482 [2024-02-13 08:28:47.000433] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:14.420 08:28:47 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:14.420 08:28:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:14.420 08:28:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:14.420 08:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.420 08:28:47 -- host/discovery.sh@59 -- # sort 00:28:14.420 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:28:14.420 08:28:47 -- host/discovery.sh@59 -- # xargs 00:28:14.420 08:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.420 08:28:47 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.420 08:28:47 -- host/discovery.sh@129 -- # get_bdev_list 00:28:14.420 08:28:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.420 08:28:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:14.420 08:28:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.420 08:28:47 -- host/discovery.sh@55 -- # sort 00:28:14.420 08:28:47 -- common/autotest_common.sh@10 -- # set +x 00:28:14.420 08:28:47 -- host/discovery.sh@55 -- # xargs 00:28:14.420 08:28:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.420 08:28:48 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:14.420 08:28:48 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:14.420 08:28:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:14.420 08:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.420 08:28:48 -- common/autotest_common.sh@10 -- # set +x 
00:28:14.420 08:28:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:14.420 08:28:48 -- host/discovery.sh@63 -- # sort -n 00:28:14.420 08:28:48 -- host/discovery.sh@63 -- # xargs 00:28:14.420 08:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.420 08:28:48 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:14.420 08:28:48 -- host/discovery.sh@131 -- # get_notification_count 00:28:14.420 08:28:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:14.420 08:28:48 -- host/discovery.sh@74 -- # jq '. | length' 00:28:14.420 08:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.420 08:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.420 08:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.421 08:28:48 -- host/discovery.sh@74 -- # notification_count=0 00:28:14.421 08:28:48 -- host/discovery.sh@75 -- # notify_id=2 00:28:14.421 08:28:48 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:14.421 08:28:48 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:14.421 08:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.421 08:28:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.680 08:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.680 08:28:48 -- host/discovery.sh@135 -- # sleep 1 00:28:15.618 08:28:49 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:15.618 08:28:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:15.618 08:28:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.618 08:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.618 08:28:49 -- host/discovery.sh@59 -- # sort 00:28:15.618 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.618 08:28:49 -- host/discovery.sh@59 -- # xargs 00:28:15.618 08:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.618 08:28:49 -- host/discovery.sh@136 -- # [[ '' == '' ]]
00:28:15.618 08:28:49 -- host/discovery.sh@137 -- # get_bdev_list 00:28:15.618 08:28:49 -- host/discovery.sh@55 -- # sort 00:28:15.618 08:28:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.618 08:28:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.618 08:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.618 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.618 08:28:49 -- host/discovery.sh@55 -- # xargs 00:28:15.618 08:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.618 08:28:49 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:15.618 08:28:49 -- host/discovery.sh@138 -- # get_notification_count 00:28:15.618 08:28:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:15.618 08:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.618 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:28:15.618 08:28:49 -- host/discovery.sh@74 -- # jq '. | length'
00:28:15.618 08:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.618 08:28:49 -- host/discovery.sh@74 -- # notification_count=2 00:28:15.618 08:28:49 -- host/discovery.sh@75 -- # notify_id=4 00:28:15.618 08:28:49 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:15.618 08:28:49 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:15.618 08:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.618 08:28:49 -- common/autotest_common.sh@10 -- # set +x 00:28:16.999 [2024-02-13 08:28:50.284102] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:16.999 [2024-02-13 08:28:50.284120] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:17.000 [2024-02-13 08:28:50.284135] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:17.000 [2024-02-13 08:28:50.372398] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:17.000 [2024-02-13 08:28:50.479126] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:17.000 [2024-02-13 08:28:50.479154] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:17.000 08:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.000 08:28:50 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.000 08:28:50 -- common/autotest_common.sh@638 -- # local es=0 00:28:17.000 08:28:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.000 08:28:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:17.000 08:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.000 08:28:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:17.000 08:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.000 08:28:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.000 08:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.000 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:17.000 request: 00:28:17.000 { 00:28:17.000 "name": "nvme", 00:28:17.000 "trtype": "tcp", 00:28:17.000 "traddr": "10.0.0.2", 00:28:17.000 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:17.000 "adrfam": "ipv4", 00:28:17.000 "trsvcid": "8009", 00:28:17.000 "wait_for_attach": true, 00:28:17.000 "method": "bdev_nvme_start_discovery", 00:28:17.000 "req_id": 1 00:28:17.000 } 00:28:17.000 Got JSON-RPC error response 00:28:17.000 response: 00:28:17.000 { 00:28:17.000 "code": -17, 00:28:17.000 "message": "File exists" 00:28:17.000 } 00:28:17.000 08:28:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:17.000 08:28:50 -- common/autotest_common.sh@641 -- # es=1 00:28:17.000 08:28:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:17.000 08:28:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:17.000 08:28:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:17.000 08:28:50 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:17.000 08:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # sort 00:28:17.000 
08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # xargs 00:28:17.000 08:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.000 08:28:50 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:17.000 08:28:50 -- host/discovery.sh@147 -- # get_bdev_list 00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # xargs 00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.000 08:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # sort 00:28:17.000 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:17.000 08:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.000 08:28:50 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:17.000 08:28:50 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.000 08:28:50 -- common/autotest_common.sh@638 -- # local es=0 00:28:17.000 08:28:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.000 08:28:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:17.000 08:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.000 08:28:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:17.000 08:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.000 08:28:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.000 08:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.000 
08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:17.000 request: 00:28:17.000 { 00:28:17.000 "name": "nvme_second", 00:28:17.000 "trtype": "tcp", 00:28:17.000 "traddr": "10.0.0.2", 00:28:17.000 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:17.000 "adrfam": "ipv4", 00:28:17.000 "trsvcid": "8009", 00:28:17.000 "wait_for_attach": true, 00:28:17.000 "method": "bdev_nvme_start_discovery", 00:28:17.000 "req_id": 1 00:28:17.000 } 00:28:17.000 Got JSON-RPC error response 00:28:17.000 response: 00:28:17.000 { 00:28:17.000 "code": -17, 00:28:17.000 "message": "File exists" 00:28:17.000 } 00:28:17.000 08:28:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:17.000 08:28:50 -- common/autotest_common.sh@641 -- # es=1 00:28:17.000 08:28:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:17.000 08:28:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:17.000 08:28:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:17.000 08:28:50 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:17.000 08:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # sort 00:28:17.000 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:17.000 08:28:50 -- host/discovery.sh@67 -- # xargs 00:28:17.000 08:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.000 08:28:50 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:17.000 08:28:50 -- host/discovery.sh@153 -- # get_bdev_list 00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.000 08:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # sort 00:28:17.000 08:28:50 -- common/autotest_common.sh@10 -- # set +x
00:28:17.000 08:28:50 -- host/discovery.sh@55 -- # xargs 00:28:17.259 08:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.259 08:28:50 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:17.259 08:28:50 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.259 08:28:50 -- common/autotest_common.sh@638 -- # local es=0 00:28:17.259 08:28:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.260 08:28:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:17.260 08:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.260 08:28:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:17.260 08:28:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:17.260 08:28:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.260 08:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.260 08:28:50 -- common/autotest_common.sh@10 -- # set +x 00:28:18.197 [2024-02-13 08:28:51.711141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.197 [2024-02-13 08:28:51.711476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.197 [2024-02-13 08:28:51.711489] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2230ee0 with addr=10.0.0.2, port=8010 00:28:18.197 [2024-02-13 08:28:51.711501] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:18.197 [2024-02-13 08:28:51.711508] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:28:18.197 [2024-02-13 08:28:51.711516] bdev_nvme.c:6766:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:19.136 [2024-02-13 08:28:52.713600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-02-13 08:28:52.713963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.136 [2024-02-13 08:28:52.713977] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22264a0 with addr=10.0.0.2, port=8010 00:28:19.136 [2024-02-13 08:28:52.713987] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:19.136 [2024-02-13 08:28:52.713994] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:19.136 [2024-02-13 08:28:52.714000] bdev_nvme.c:6766:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:20.109 [2024-02-13 08:28:53.715616] bdev_nvme.c:6747:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:20.109 request: 00:28:20.109 { 00:28:20.109 "name": "nvme_second", 00:28:20.109 "trtype": "tcp", 00:28:20.109 "traddr": "10.0.0.2", 00:28:20.109 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:20.109 "adrfam": "ipv4", 00:28:20.109 "trsvcid": "8010", 00:28:20.109 "attach_timeout_ms": 3000, 00:28:20.109 "method": "bdev_nvme_start_discovery", 00:28:20.109 "req_id": 1 00:28:20.109 } 00:28:20.109 Got JSON-RPC error response 00:28:20.109 response: 00:28:20.109 { 00:28:20.109 "code": -110, 00:28:20.109 "message": "Connection timed out" 00:28:20.109 } 00:28:20.109 08:28:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:20.109 08:28:53 -- common/autotest_common.sh@641 -- # es=1 00:28:20.109 08:28:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:20.109 08:28:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:20.109 08:28:53 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:28:20.109 08:28:53 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:20.109 08:28:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:20.109 08:28:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:20.109 08:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.109 08:28:53 -- host/discovery.sh@67 -- # sort 00:28:20.109 08:28:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.109 08:28:53 -- host/discovery.sh@67 -- # xargs 00:28:20.109 08:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.109 08:28:53 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:20.109 08:28:53 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:20.109 08:28:53 -- host/discovery.sh@162 -- # kill 2416372 00:28:20.109 08:28:53 -- host/discovery.sh@163 -- # nvmftestfini 00:28:20.109 08:28:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:20.109 08:28:53 -- nvmf/common.sh@116 -- # sync 00:28:20.110 08:28:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:20.110 08:28:53 -- nvmf/common.sh@119 -- # set +e 00:28:20.110 08:28:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:20.110 08:28:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:20.110 rmmod nvme_tcp 00:28:20.110 rmmod nvme_fabrics 00:28:20.369 rmmod nvme_keyring 00:28:20.369 08:28:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:20.369 08:28:53 -- nvmf/common.sh@123 -- # set -e 00:28:20.369 08:28:53 -- nvmf/common.sh@124 -- # return 0 00:28:20.369 08:28:53 -- nvmf/common.sh@477 -- # '[' -n 2416263 ']' 00:28:20.369 08:28:53 -- nvmf/common.sh@478 -- # killprocess 2416263 00:28:20.369 08:28:53 -- common/autotest_common.sh@924 -- # '[' -z 2416263 ']' 00:28:20.369 08:28:53 -- common/autotest_common.sh@928 -- # kill -0 2416263 00:28:20.369 08:28:53 -- common/autotest_common.sh@929 -- # uname 00:28:20.369 08:28:53 -- common/autotest_common.sh@929 -- # '[' Linux =
Linux ']' 00:28:20.369 08:28:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2416263 00:28:20.369 08:28:53 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:28:20.369 08:28:53 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:28:20.369 08:28:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2416263' 00:28:20.369 killing process with pid 2416263 00:28:20.369 08:28:53 -- common/autotest_common.sh@943 -- # kill 2416263 00:28:20.369 08:28:53 -- common/autotest_common.sh@948 -- # wait 2416263 00:28:20.629 08:28:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:20.629 08:28:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:20.629 08:28:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:20.630 08:28:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.630 08:28:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:20.630 08:28:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.630 08:28:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.630 08:28:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.536 08:28:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:22.536 00:28:22.536 real 0m20.766s 00:28:22.536 user 0m27.723s 00:28:22.536 sys 0m5.626s 00:28:22.536 08:28:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:22.536 08:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:22.536 ************************************ 00:28:22.536 END TEST nvmf_discovery 00:28:22.536 ************************************ 00:28:22.536 08:28:56 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:22.536 08:28:56 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:22.536 08:28:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:22.536 08:28:56 -- 
common/autotest_common.sh@10 -- # set +x 00:28:22.536 ************************************ 00:28:22.536 START TEST nvmf_discovery_remove_ifc 00:28:22.536 ************************************ 00:28:22.536 08:28:56 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:22.796 * Looking for test storage... 00:28:22.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.796 08:28:56 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.796 08:28:56 -- nvmf/common.sh@7 -- # uname -s 00:28:22.796 08:28:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.796 08:28:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.796 08:28:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.796 08:28:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.796 08:28:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.796 08:28:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.796 08:28:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.796 08:28:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.796 08:28:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.796 08:28:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.796 08:28:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:22.796 08:28:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:22.796 08:28:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.796 08:28:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.796 08:28:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.796 08:28:56 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.796 08:28:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.796 08:28:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.796 08:28:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.796 08:28:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.796 08:28:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.797 08:28:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.797 08:28:56 -- paths/export.sh@5 -- # export PATH 00:28:22.797 08:28:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.797 08:28:56 -- nvmf/common.sh@46 -- # : 0 00:28:22.797 08:28:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:22.797 08:28:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:22.797 08:28:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:22.797 08:28:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.797 08:28:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.797 08:28:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:22.797 08:28:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:22.797 08:28:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:22.797 08:28:56 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:22.797 08:28:56 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:22.797 08:28:56 -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:22.797 08:28:56 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:22.797 08:28:56 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:22.797 08:28:56 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:22.797 08:28:56 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:22.797 08:28:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:22.797 08:28:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.797 08:28:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:22.797 08:28:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:22.797 08:28:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:22.797 08:28:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.797 08:28:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.797 08:28:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.797 08:28:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:22.797 08:28:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:22.797 08:28:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:22.797 08:28:56 -- common/autotest_common.sh@10 -- # set +x 00:28:28.072 08:29:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:28.072 08:29:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:28.072 08:29:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:28.072 08:29:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:28.072 08:29:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:28.072 08:29:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:28.072 08:29:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:28.072 08:29:01 -- nvmf/common.sh@294 -- # net_devs=() 00:28:28.072 08:29:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:28.072 08:29:01 -- nvmf/common.sh@295 -- # e810=() 
00:28:28.072 08:29:01 -- nvmf/common.sh@295 -- # local -ga e810 00:28:28.072 08:29:01 -- nvmf/common.sh@296 -- # x722=() 00:28:28.072 08:29:01 -- nvmf/common.sh@296 -- # local -ga x722 00:28:28.072 08:29:01 -- nvmf/common.sh@297 -- # mlx=() 00:28:28.072 08:29:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:28.072 08:29:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.072 08:29:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:28.072 08:29:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:28.072 08:29:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:28.072 08:29:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.072 08:29:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:28.072 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:28.072 08:29:01 -- nvmf/common.sh@341 -- 
# [[ ice == unknown ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.072 08:29:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:28.072 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:28.072 08:29:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:28.072 08:29:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.072 08:29:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.072 08:29:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.072 08:29:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.072 08:29:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:28.072 Found net devices under 0000:af:00.0: cvl_0_0 00:28:28.072 08:29:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.072 08:29:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.072 08:29:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.072 08:29:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.072 08:29:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:28:28.072 08:29:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:28.072 Found net devices under 0000:af:00.1: cvl_0_1 00:28:28.072 08:29:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.072 08:29:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:28.072 08:29:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:28.072 08:29:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:28.072 08:29:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:28.072 08:29:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.072 08:29:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.072 08:29:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.072 08:29:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:28.072 08:29:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.072 08:29:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.072 08:29:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:28.072 08:29:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.072 08:29:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.072 08:29:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:28.072 08:29:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:28.072 08:29:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.072 08:29:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.072 08:29:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.072 08:29:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.072 08:29:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:28.072 08:29:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.332 
08:29:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.332 08:29:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.332 08:29:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:28.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:28:28.332 00:28:28.332 --- 10.0.0.2 ping statistics --- 00:28:28.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.332 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:28:28.332 08:29:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:28.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:28:28.332 00:28:28.332 --- 10.0.0.1 ping statistics --- 00:28:28.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.332 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:28:28.332 08:29:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.332 08:29:01 -- nvmf/common.sh@410 -- # return 0 00:28:28.332 08:29:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:28.332 08:29:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.332 08:29:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:28.332 08:29:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:28.332 08:29:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.332 08:29:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:28.332 08:29:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:28.332 08:29:01 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:28.332 08:29:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:28.332 08:29:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:28.332 08:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:28.332 08:29:01 -- 
nvmf/common.sh@469 -- # nvmfpid=2422174 00:28:28.332 08:29:01 -- nvmf/common.sh@470 -- # waitforlisten 2422174 00:28:28.332 08:29:01 -- common/autotest_common.sh@817 -- # '[' -z 2422174 ']' 00:28:28.332 08:29:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.332 08:29:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:28.332 08:29:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.332 08:29:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:28.332 08:29:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:28.332 08:29:01 -- common/autotest_common.sh@10 -- # set +x 00:28:28.332 [2024-02-13 08:29:01.915722] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:28.332 [2024-02-13 08:29:01.915764] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.332 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.332 [2024-02-13 08:29:01.976199] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.591 [2024-02-13 08:29:02.051760] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:28.591 [2024-02-13 08:29:02.051862] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.591 [2024-02-13 08:29:02.051869] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:28.591 [2024-02-13 08:29:02.051875] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.591 [2024-02-13 08:29:02.051896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.160 08:29:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:29.160 08:29:02 -- common/autotest_common.sh@850 -- # return 0 00:28:29.160 08:29:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:29.160 08:29:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:29.160 08:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:29.160 08:29:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.160 08:29:02 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:29.160 08:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.160 08:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:29.160 [2024-02-13 08:29:02.757764] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.160 [2024-02-13 08:29:02.765889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:29.160 null0 00:28:29.160 [2024-02-13 08:29:02.797895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.160 08:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.160 08:29:02 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2422417 00:28:29.160 08:29:02 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:29.160 08:29:02 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2422417 /tmp/host.sock 00:28:29.160 08:29:02 -- common/autotest_common.sh@817 -- # '[' -z 2422417 ']' 00:28:29.160 08:29:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:28:29.160 08:29:02 -- common/autotest_common.sh@822 
-- # local max_retries=100 00:28:29.160 08:29:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:29.160 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:29.160 08:29:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:29.160 08:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:29.419 [2024-02-13 08:29:02.862198] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:29.419 [2024-02-13 08:29:02.862237] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422417 ] 00:28:29.419 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.419 [2024-02-13 08:29:02.920828] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.419 [2024-02-13 08:29:02.990309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:29.419 [2024-02-13 08:29:02.990427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.988 08:29:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:29.988 08:29:03 -- common/autotest_common.sh@850 -- # return 0 00:28:29.988 08:29:03 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:29.988 08:29:03 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:29.988 08:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.988 08:29:03 -- common/autotest_common.sh@10 -- # set +x 00:28:29.988 08:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.988 08:29:03 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:29.988 08:29:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.988 08:29:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.248 08:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:30.248 08:29:03 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:30.248 08:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:30.248 08:29:03 -- common/autotest_common.sh@10 -- # set +x 00:28:31.186 [2024-02-13 08:29:04.746211] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:31.186 [2024-02-13 08:29:04.746231] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:31.186 [2024-02-13 08:29:04.746244] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:31.186 [2024-02-13 08:29:04.834509] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:31.446 [2024-02-13 08:29:05.019827] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:31.446 [2024-02-13 08:29:05.019864] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:31.446 [2024-02-13 08:29:05.019885] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:31.446 [2024-02-13 08:29:05.019897] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:31.446 [2024-02-13 08:29:05.019917] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:31.446 08:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.446 08:29:05 -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.446 08:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.446 [2024-02-13 08:29:05.025771] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe50490 was disconnected and freed. delete nvme_qpair. 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:31.446 08:29:05 -- common/autotest_common.sh@10 -- # set +x 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:31.446 08:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:31.446 08:29:05 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.705 08:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:31.705 08:29:05 -- common/autotest_common.sh@10 -- # set +x 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:31.705 08:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:31.705 08:29:05 -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:28:32.643 08:29:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:32.643 08:29:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.643 08:29:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.643 08:29:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.643 08:29:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.643 08:29:06 -- common/autotest_common.sh@10 -- # set +x 00:28:32.643 08:29:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.643 08:29:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.643 08:29:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:32.643 08:29:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:33.580 08:29:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.580 08:29:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.580 08:29:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.580 08:29:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.580 08:29:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:33.580 08:29:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.580 08:29:07 -- common/autotest_common.sh@10 -- # set +x 00:28:33.840 08:29:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:33.840 08:29:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:33.840 08:29:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:34.777 08:29:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:34.777 08:29:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.777 08:29:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:34.777 08:29:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:34.777 08:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:34.777 08:29:08 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:28:34.777 08:29:08 -- common/autotest_common.sh@10 -- # set +x 00:28:34.777 08:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:34.777 08:29:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:34.777 08:29:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:35.715 08:29:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.715 08:29:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.715 08:29:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.715 08:29:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.715 08:29:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.715 08:29:09 -- common/autotest_common.sh@10 -- # set +x 00:28:35.715 08:29:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.715 08:29:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.975 08:29:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:35.975 08:29:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.923 08:29:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.923 08:29:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.923 08:29:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.923 08:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:36.923 08:29:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.923 08:29:10 -- common/autotest_common.sh@10 -- # set +x 00:28:36.923 08:29:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.923 08:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:36.923 08:29:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:36.923 08:29:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.923 [2024-02-13 08:29:10.461225] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
110: Connection timed out 00:28:36.923 [2024-02-13 08:29:10.461271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.923 [2024-02-13 08:29:10.461282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.923 [2024-02-13 08:29:10.461292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.923 [2024-02-13 08:29:10.461299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.923 [2024-02-13 08:29:10.461306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.923 [2024-02-13 08:29:10.461313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.924 [2024-02-13 08:29:10.461320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.924 [2024-02-13 08:29:10.461327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.924 [2024-02-13 08:29:10.461334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:36.924 [2024-02-13 08:29:10.461340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:36.924 [2024-02-13 08:29:10.461346] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16870 is same with the state(5) to be set 00:28:36.924 [2024-02-13 08:29:10.471245] 
nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe16870 (9): Bad file descriptor 00:28:36.924 [2024-02-13 08:29:10.481285] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:37.862 08:29:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:37.862 08:29:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.862 08:29:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:37.862 08:29:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:37.862 08:29:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:37.862 08:29:11 -- common/autotest_common.sh@10 -- # set +x 00:28:37.862 08:29:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:37.862 [2024-02-13 08:29:11.520730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:39.242 [2024-02-13 08:29:12.544733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:39.242 [2024-02-13 08:29:12.544782] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16870 with addr=10.0.0.2, port=4420 00:28:39.242 [2024-02-13 08:29:12.544801] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe16870 is same with the state(5) to be set 00:28:39.242 [2024-02-13 08:29:12.545244] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe16870 (9): Bad file descriptor 00:28:39.242 [2024-02-13 08:29:12.545276] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.242 [2024-02-13 08:29:12.545304] bdev_nvme.c:6455:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:39.242 [2024-02-13 08:29:12.545334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.242 [2024-02-13 08:29:12.545348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.242 [2024-02-13 08:29:12.545362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.242 [2024-02-13 08:29:12.545378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.242 [2024-02-13 08:29:12.545390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.242 [2024-02-13 08:29:12.545401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.242 [2024-02-13 08:29:12.545412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.242 [2024-02-13 08:29:12.545422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.242 [2024-02-13 08:29:12.545434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:39.242 [2024-02-13 08:29:12.545444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.242 [2024-02-13 08:29:12.545454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:28:39.242 [2024-02-13 08:29:12.545833] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe16c80 (9): Bad file descriptor 00:28:39.242 [2024-02-13 08:29:12.546847] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:39.242 [2024-02-13 08:29:12.546863] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:39.242 08:29:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:39.242 08:29:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.242 08:29:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.180 08:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.180 08:29:13 -- common/autotest_common.sh@10 -- # set +x 00:28:40.180 08:29:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 
-- # jq -r '.[].name' 00:28:40.180 08:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.180 08:29:13 -- common/autotest_common.sh@10 -- # set +x 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.180 08:29:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:40.180 08:29:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:41.118 [2024-02-13 08:29:14.602387] bdev_nvme.c:6704:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:41.118 [2024-02-13 08:29:14.602405] bdev_nvme.c:6784:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:41.118 [2024-02-13 08:29:14.602419] bdev_nvme.c:6667:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:41.118 [2024-02-13 08:29:14.731817] bdev_nvme.c:6633:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:41.118 08:29:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:41.118 08:29:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.118 08:29:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:41.118 08:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.118 08:29:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:41.118 08:29:14 -- common/autotest_common.sh@10 -- # set +x 00:28:41.118 08:29:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:41.118 08:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.118 08:29:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:41.118 08:29:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:41.378 [2024-02-13 08:29:14.832288] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 8 blocks 
with offset 0 00:28:41.378 [2024-02-13 08:29:14.832321] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:41.378 [2024-02-13 08:29:14.832337] bdev_nvme.c:7493:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:41.378 [2024-02-13 08:29:14.832350] bdev_nvme.c:6523:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:41.378 [2024-02-13 08:29:14.832358] bdev_nvme.c:6482:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:41.378 [2024-02-13 08:29:14.840989] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe23690 was disconnected and freed. delete nvme_qpair. 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:42.315 08:29:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:42.315 08:29:15 -- common/autotest_common.sh@10 -- # set +x 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:42.315 08:29:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:42.315 08:29:15 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2422417 00:28:42.315 08:29:15 -- common/autotest_common.sh@924 -- # '[' -z 2422417 ']' 00:28:42.315 08:29:15 -- common/autotest_common.sh@928 -- # kill -0 2422417 00:28:42.315 08:29:15 -- common/autotest_common.sh@929 -- # uname 00:28:42.315 08:29:15 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:42.315 08:29:15 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2422417 
00:28:42.315 08:29:15 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:28:42.315 08:29:15 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:28:42.315 08:29:15 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2422417' 00:28:42.315 killing process with pid 2422417 00:28:42.315 08:29:15 -- common/autotest_common.sh@943 -- # kill 2422417 00:28:42.315 08:29:15 -- common/autotest_common.sh@948 -- # wait 2422417 00:28:42.575 08:29:16 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:42.575 08:29:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:42.575 08:29:16 -- nvmf/common.sh@116 -- # sync 00:28:42.575 08:29:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:42.575 08:29:16 -- nvmf/common.sh@119 -- # set +e 00:28:42.575 08:29:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:42.575 08:29:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:42.575 rmmod nvme_tcp 00:28:42.575 rmmod nvme_fabrics 00:28:42.575 rmmod nvme_keyring 00:28:42.575 08:29:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:42.575 08:29:16 -- nvmf/common.sh@123 -- # set -e 00:28:42.575 08:29:16 -- nvmf/common.sh@124 -- # return 0 00:28:42.575 08:29:16 -- nvmf/common.sh@477 -- # '[' -n 2422174 ']' 00:28:42.575 08:29:16 -- nvmf/common.sh@478 -- # killprocess 2422174 00:28:42.575 08:29:16 -- common/autotest_common.sh@924 -- # '[' -z 2422174 ']' 00:28:42.575 08:29:16 -- common/autotest_common.sh@928 -- # kill -0 2422174 00:28:42.575 08:29:16 -- common/autotest_common.sh@929 -- # uname 00:28:42.575 08:29:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:42.575 08:29:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2422174 00:28:42.575 08:29:16 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:28:42.575 08:29:16 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:28:42.575 08:29:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2422174' 
00:28:42.575 killing process with pid 2422174 00:28:42.575 08:29:16 -- common/autotest_common.sh@943 -- # kill 2422174 00:28:42.575 08:29:16 -- common/autotest_common.sh@948 -- # wait 2422174 00:28:42.834 08:29:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:42.834 08:29:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:42.834 08:29:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:42.834 08:29:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.834 08:29:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:42.834 08:29:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.834 08:29:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.834 08:29:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.372 08:29:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:45.372 00:28:45.372 real 0m22.233s 00:28:45.372 user 0m27.486s 00:28:45.372 sys 0m5.451s 00:28:45.372 08:29:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:45.372 08:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:45.372 ************************************ 00:28:45.372 END TEST nvmf_discovery_remove_ifc 00:28:45.372 ************************************ 00:28:45.372 08:29:18 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:28:45.372 08:29:18 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:45.372 08:29:18 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:45.372 08:29:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:45.372 08:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:45.372 ************************************ 00:28:45.372 START TEST nvmf_digest 00:28:45.372 ************************************ 00:28:45.372 08:29:18 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh 
--transport=tcp 00:28:45.372 * Looking for test storage... 00:28:45.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:45.372 08:29:18 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.372 08:29:18 -- nvmf/common.sh@7 -- # uname -s 00:28:45.372 08:29:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.372 08:29:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.372 08:29:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.372 08:29:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.372 08:29:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.373 08:29:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.373 08:29:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.373 08:29:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.373 08:29:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.373 08:29:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.373 08:29:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:45.373 08:29:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:45.373 08:29:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.373 08:29:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.373 08:29:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.373 08:29:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.373 08:29:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.373 08:29:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.373 08:29:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.373 08:29:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.373 08:29:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.373 08:29:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.373 08:29:18 -- paths/export.sh@5 -- # export PATH 00:28:45.373 08:29:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.373 08:29:18 -- nvmf/common.sh@46 -- # : 0 00:28:45.373 08:29:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:45.373 08:29:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:45.373 08:29:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:45.373 08:29:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.373 08:29:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.373 08:29:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:45.373 08:29:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:45.373 08:29:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:45.373 08:29:18 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:45.373 08:29:18 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:45.373 08:29:18 -- host/digest.sh@16 -- # runtime=2 00:28:45.373 08:29:18 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:45.373 08:29:18 -- host/digest.sh@132 -- # nvmftestinit 00:28:45.373 08:29:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:45.373 08:29:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.373 08:29:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:45.373 08:29:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:45.373 08:29:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:45.373 08:29:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.373 08:29:18 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:28:45.373 08:29:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.373 08:29:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:45.373 08:29:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:45.373 08:29:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:45.373 08:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:51.945 08:29:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:51.945 08:29:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:51.945 08:29:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:51.945 08:29:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:51.945 08:29:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:51.945 08:29:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:51.945 08:29:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:51.945 08:29:24 -- nvmf/common.sh@294 -- # net_devs=() 00:28:51.945 08:29:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:51.945 08:29:24 -- nvmf/common.sh@295 -- # e810=() 00:28:51.945 08:29:24 -- nvmf/common.sh@295 -- # local -ga e810 00:28:51.945 08:29:24 -- nvmf/common.sh@296 -- # x722=() 00:28:51.945 08:29:24 -- nvmf/common.sh@296 -- # local -ga x722 00:28:51.945 08:29:24 -- nvmf/common.sh@297 -- # mlx=() 00:28:51.945 08:29:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:51.945 08:29:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.945 08:29:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:51.945 08:29:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:51.945 08:29:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:51.945 08:29:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:51.945 08:29:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:51.945 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:51.945 08:29:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:51.945 08:29:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:51.945 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:51.945 08:29:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:28:51.945 08:29:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:51.945 08:29:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:51.945 08:29:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.945 08:29:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:51.945 08:29:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.945 08:29:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:51.945 Found net devices under 0000:af:00.0: cvl_0_0 00:28:51.945 08:29:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.945 08:29:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:51.945 08:29:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.945 08:29:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:51.945 08:29:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.945 08:29:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:51.945 Found net devices under 0000:af:00.1: cvl_0_1 00:28:51.945 08:29:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.945 08:29:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:51.945 08:29:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:51.945 08:29:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:51.945 08:29:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:51.945 08:29:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.945 08:29:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.945 08:29:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.945 08:29:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:51.945 08:29:24 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.945 08:29:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.945 08:29:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:51.945 08:29:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.945 08:29:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.945 08:29:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:51.945 08:29:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:51.945 08:29:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.945 08:29:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.945 08:29:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.945 08:29:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.945 08:29:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:51.945 08:29:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.945 08:29:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.945 08:29:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.945 08:29:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:51.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:28:51.945 00:28:51.945 --- 10.0.0.2 ping statistics --- 00:28:51.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.945 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:51.945 08:29:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:28:51.945 00:28:51.945 --- 10.0.0.1 ping statistics --- 00:28:51.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.945 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:51.946 08:29:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.946 08:29:24 -- nvmf/common.sh@410 -- # return 0 00:28:51.946 08:29:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:51.946 08:29:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.946 08:29:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:51.946 08:29:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:51.946 08:29:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.946 08:29:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:51.946 08:29:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:51.946 08:29:24 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:51.946 08:29:24 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:28:51.946 08:29:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:51.946 08:29:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:51.946 08:29:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.946 ************************************ 00:28:51.946 START TEST nvmf_digest_clean 00:28:51.946 ************************************ 00:28:51.946 08:29:24 -- common/autotest_common.sh@1102 -- # run_digest 00:28:51.946 08:29:24 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:28:51.946 08:29:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:51.946 08:29:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:51.946 08:29:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.946 08:29:24 -- nvmf/common.sh@469 -- # nvmfpid=2428428 00:28:51.946 08:29:24 -- nvmf/common.sh@470 -- # waitforlisten 2428428 00:28:51.946 08:29:24 -- 
nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:51.946 08:29:24 -- common/autotest_common.sh@817 -- # '[' -z 2428428 ']' 00:28:51.946 08:29:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.946 08:29:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:51.946 08:29:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.946 08:29:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:51.946 08:29:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.946 [2024-02-13 08:29:24.974018] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:51.946 [2024-02-13 08:29:24.974062] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.946 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.946 [2024-02-13 08:29:25.036098] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.946 [2024-02-13 08:29:25.111196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:51.946 [2024-02-13 08:29:25.111299] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.946 [2024-02-13 08:29:25.111307] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.946 [2024-02-13 08:29:25.111316] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:51.946 [2024-02-13 08:29:25.111336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.205 08:29:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:52.205 08:29:25 -- common/autotest_common.sh@850 -- # return 0 00:28:52.205 08:29:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:52.205 08:29:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:52.205 08:29:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.205 08:29:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.205 08:29:25 -- host/digest.sh@120 -- # common_target_config 00:28:52.205 08:29:25 -- host/digest.sh@43 -- # rpc_cmd 00:28:52.205 08:29:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.205 08:29:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.205 null0 00:28:52.205 [2024-02-13 08:29:25.877568] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.465 [2024-02-13 08:29:25.901754] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.465 08:29:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.465 08:29:25 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:28:52.465 08:29:25 -- host/digest.sh@77 -- # local rw bs qd 00:28:52.465 08:29:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:52.465 08:29:25 -- host/digest.sh@80 -- # rw=randread 00:28:52.465 08:29:25 -- host/digest.sh@80 -- # bs=4096 00:28:52.465 08:29:25 -- host/digest.sh@80 -- # qd=128 00:28:52.465 08:29:25 -- host/digest.sh@82 -- # bperfpid=2428616 00:28:52.465 08:29:25 -- host/digest.sh@83 -- # waitforlisten 2428616 /var/tmp/bperf.sock 00:28:52.465 08:29:25 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:52.465 08:29:25 -- 
common/autotest_common.sh@817 -- # '[' -z 2428616 ']' 00:28:52.465 08:29:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:52.465 08:29:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:52.465 08:29:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:52.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:52.465 08:29:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:52.465 08:29:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.465 [2024-02-13 08:29:25.948817] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:52.465 [2024-02-13 08:29:25.948863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428616 ] 00:28:52.465 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.465 [2024-02-13 08:29:26.007289] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.465 [2024-02-13 08:29:26.078815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.402 08:29:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:53.402 08:29:26 -- common/autotest_common.sh@850 -- # return 0 00:28:53.402 08:29:26 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:53.402 08:29:26 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:53.402 08:29:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:53.402 08:29:26 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.403 08:29:26 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.662 nvme0n1 00:28:53.662 08:29:27 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:53.662 08:29:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.662 Running I/O for 2 seconds... 00:28:56.199 00:28:56.199 Latency(us) 00:28:56.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.199 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:56.199 nvme0n1 : 2.00 28488.81 111.28 0.00 0.00 4488.97 2012.89 15603.81 00:28:56.199 =================================================================================================================== 00:28:56.199 Total : 28488.81 111.28 0.00 0.00 4488.97 2012.89 15603.81 00:28:56.199 0 00:28:56.199 08:29:29 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:56.199 08:29:29 -- host/digest.sh@92 -- # get_accel_stats 00:28:56.199 08:29:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:56.199 08:29:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:56.199 | select(.opcode=="crc32c") 00:28:56.199 | "\(.module_name) \(.executed)"' 00:28:56.199 08:29:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:56.199 08:29:29 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:56.199 08:29:29 -- host/digest.sh@93 -- # exp_module=software 00:28:56.199 08:29:29 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:56.199 08:29:29 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:56.199 08:29:29 -- host/digest.sh@97 -- # killprocess 2428616 00:28:56.200 08:29:29 -- common/autotest_common.sh@924 -- # '[' -z 2428616 ']' 00:28:56.200 08:29:29 -- 
common/autotest_common.sh@928 -- # kill -0 2428616 00:28:56.200 08:29:29 -- common/autotest_common.sh@929 -- # uname 00:28:56.200 08:29:29 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:56.200 08:29:29 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2428616 00:28:56.200 08:29:29 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:28:56.200 08:29:29 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:28:56.200 08:29:29 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2428616' 00:28:56.200 killing process with pid 2428616 00:28:56.200 08:29:29 -- common/autotest_common.sh@943 -- # kill 2428616 00:28:56.200 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.200 00:28:56.200 Latency(us) 00:28:56.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.200 =================================================================================================================== 00:28:56.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.200 08:29:29 -- common/autotest_common.sh@948 -- # wait 2428616 00:28:56.200 08:29:29 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:28:56.200 08:29:29 -- host/digest.sh@77 -- # local rw bs qd 00:28:56.200 08:29:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:56.200 08:29:29 -- host/digest.sh@80 -- # rw=randread 00:28:56.200 08:29:29 -- host/digest.sh@80 -- # bs=131072 00:28:56.200 08:29:29 -- host/digest.sh@80 -- # qd=16 00:28:56.200 08:29:29 -- host/digest.sh@82 -- # bperfpid=2429309 00:28:56.200 08:29:29 -- host/digest.sh@83 -- # waitforlisten 2429309 /var/tmp/bperf.sock 00:28:56.200 08:29:29 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:56.200 08:29:29 -- common/autotest_common.sh@817 -- # '[' -z 2429309 ']' 00:28:56.200 08:29:29 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.200 08:29:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:56.200 08:29:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.200 08:29:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:56.200 08:29:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.200 [2024-02-13 08:29:29.827636] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:56.200 [2024-02-13 08:29:29.827696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429309 ] 00:28:56.200 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:56.200 Zero copy mechanism will not be used. 
00:28:56.200 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.459 [2024-02-13 08:29:29.887941] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.459 [2024-02-13 08:29:29.964469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.028 08:29:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:57.028 08:29:30 -- common/autotest_common.sh@850 -- # return 0 00:28:57.028 08:29:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:57.028 08:29:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:57.028 08:29:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:57.287 08:29:30 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.287 08:29:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.546 nvme0n1 00:28:57.546 08:29:31 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:57.546 08:29:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:57.806 Zero copy mechanism will not be used. 00:28:57.806 Running I/O for 2 seconds... 
00:28:59.712 00:28:59.712 Latency(us) 00:28:59.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.712 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:59.712 nvme0n1 : 2.00 3302.20 412.78 0.00 0.00 4842.40 3994.58 20846.69 00:28:59.712 =================================================================================================================== 00:28:59.712 Total : 3302.20 412.78 0.00 0.00 4842.40 3994.58 20846.69 00:28:59.712 0 00:28:59.712 08:29:33 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:59.712 08:29:33 -- host/digest.sh@92 -- # get_accel_stats 00:28:59.712 08:29:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:59.712 08:29:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:59.712 08:29:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:59.712 | select(.opcode=="crc32c") 00:28:59.712 | "\(.module_name) \(.executed)"' 00:29:00.004 08:29:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:00.004 08:29:33 -- host/digest.sh@93 -- # exp_module=software 00:29:00.004 08:29:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:00.004 08:29:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:00.004 08:29:33 -- host/digest.sh@97 -- # killprocess 2429309 00:29:00.004 08:29:33 -- common/autotest_common.sh@924 -- # '[' -z 2429309 ']' 00:29:00.004 08:29:33 -- common/autotest_common.sh@928 -- # kill -0 2429309 00:29:00.004 08:29:33 -- common/autotest_common.sh@929 -- # uname 00:29:00.004 08:29:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:00.004 08:29:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2429309 00:29:00.004 08:29:33 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:00.004 08:29:33 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:00.004 08:29:33 -- 
common/autotest_common.sh@942 -- # echo 'killing process with pid 2429309' 00:29:00.004 killing process with pid 2429309 00:29:00.004 08:29:33 -- common/autotest_common.sh@943 -- # kill 2429309 00:29:00.004 Received shutdown signal, test time was about 2.000000 seconds 00:29:00.004 00:29:00.004 Latency(us) 00:29:00.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.004 =================================================================================================================== 00:29:00.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.004 08:29:33 -- common/autotest_common.sh@948 -- # wait 2429309 00:29:00.273 08:29:33 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:00.273 08:29:33 -- host/digest.sh@77 -- # local rw bs qd 00:29:00.273 08:29:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.273 08:29:33 -- host/digest.sh@80 -- # rw=randwrite 00:29:00.273 08:29:33 -- host/digest.sh@80 -- # bs=4096 00:29:00.273 08:29:33 -- host/digest.sh@80 -- # qd=128 00:29:00.273 08:29:33 -- host/digest.sh@82 -- # bperfpid=2429972 00:29:00.273 08:29:33 -- host/digest.sh@83 -- # waitforlisten 2429972 /var/tmp/bperf.sock 00:29:00.273 08:29:33 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:00.273 08:29:33 -- common/autotest_common.sh@817 -- # '[' -z 2429972 ']' 00:29:00.273 08:29:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.273 08:29:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:00.273 08:29:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:00.273 08:29:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:00.273 08:29:33 -- common/autotest_common.sh@10 -- # set +x 00:29:00.273 [2024-02-13 08:29:33.729819] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:00.273 [2024-02-13 08:29:33.729870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429972 ] 00:29:00.273 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.273 [2024-02-13 08:29:33.788521] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.273 [2024-02-13 08:29:33.864521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.211 08:29:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:01.212 08:29:34 -- common/autotest_common.sh@850 -- # return 0 00:29:01.212 08:29:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:01.212 08:29:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:01.212 08:29:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.212 08:29:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.212 08:29:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.471 nvme0n1 00:29:01.471 08:29:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:01.471 08:29:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.471 Running I/O for 2 seconds... 
00:29:04.010 00:29:04.010 Latency(us) 00:29:04.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.010 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:04.010 nvme0n1 : 2.00 29014.04 113.34 0.00 0.00 4404.35 2137.72 18849.40 00:29:04.010 =================================================================================================================== 00:29:04.010 Total : 29014.04 113.34 0.00 0.00 4404.35 2137.72 18849.40 00:29:04.010 0 00:29:04.010 08:29:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:04.010 08:29:37 -- host/digest.sh@92 -- # get_accel_stats 00:29:04.010 08:29:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.010 08:29:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.010 | select(.opcode=="crc32c") 00:29:04.010 | "\(.module_name) \(.executed)"' 00:29:04.010 08:29:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.010 08:29:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:04.010 08:29:37 -- host/digest.sh@93 -- # exp_module=software 00:29:04.010 08:29:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:04.010 08:29:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.010 08:29:37 -- host/digest.sh@97 -- # killprocess 2429972 00:29:04.010 08:29:37 -- common/autotest_common.sh@924 -- # '[' -z 2429972 ']' 00:29:04.010 08:29:37 -- common/autotest_common.sh@928 -- # kill -0 2429972 00:29:04.010 08:29:37 -- common/autotest_common.sh@929 -- # uname 00:29:04.010 08:29:37 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:04.010 08:29:37 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2429972 00:29:04.010 08:29:37 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:04.010 08:29:37 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:04.010 08:29:37 -- 
common/autotest_common.sh@942 -- # echo 'killing process with pid 2429972' 00:29:04.010 killing process with pid 2429972 00:29:04.010 08:29:37 -- common/autotest_common.sh@943 -- # kill 2429972 00:29:04.010 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.010 00:29:04.010 Latency(us) 00:29:04.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.010 =================================================================================================================== 00:29:04.010 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.010 08:29:37 -- common/autotest_common.sh@948 -- # wait 2429972 00:29:04.010 08:29:37 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:04.010 08:29:37 -- host/digest.sh@77 -- # local rw bs qd 00:29:04.010 08:29:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.010 08:29:37 -- host/digest.sh@80 -- # rw=randwrite 00:29:04.010 08:29:37 -- host/digest.sh@80 -- # bs=131072 00:29:04.010 08:29:37 -- host/digest.sh@80 -- # qd=16 00:29:04.010 08:29:37 -- host/digest.sh@82 -- # bperfpid=2430496 00:29:04.010 08:29:37 -- host/digest.sh@83 -- # waitforlisten 2430496 /var/tmp/bperf.sock 00:29:04.010 08:29:37 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:04.010 08:29:37 -- common/autotest_common.sh@817 -- # '[' -z 2430496 ']' 00:29:04.010 08:29:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.010 08:29:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:04.010 08:29:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:04.010 08:29:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:04.010 08:29:37 -- common/autotest_common.sh@10 -- # set +x 00:29:04.010 [2024-02-13 08:29:37.599017] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:04.010 [2024-02-13 08:29:37.599064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430496 ] 00:29:04.010 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:04.010 Zero copy mechanism will not be used. 00:29:04.010 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.010 [2024-02-13 08:29:37.659432] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.269 [2024-02-13 08:29:37.727469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.838 08:29:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:04.838 08:29:38 -- common/autotest_common.sh@850 -- # return 0 00:29:04.838 08:29:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:04.838 08:29:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:04.838 08:29:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.097 08:29:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.098 08:29:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.357 nvme0n1 00:29:05.357 08:29:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:05.357 08:29:38 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.357 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:05.357 Zero copy mechanism will not be used. 00:29:05.357 Running I/O for 2 seconds... 00:29:07.892 00:29:07.892 Latency(us) 00:29:07.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.892 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:07.892 nvme0n1 : 2.01 2986.62 373.33 0.00 0.00 5347.84 3682.50 22594.32 00:29:07.892 =================================================================================================================== 00:29:07.892 Total : 2986.62 373.33 0.00 0.00 5347.84 3682.50 22594.32 00:29:07.892 0 00:29:07.892 08:29:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:07.892 08:29:41 -- host/digest.sh@92 -- # get_accel_stats 00:29:07.892 08:29:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:07.892 08:29:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:07.892 | select(.opcode=="crc32c") 00:29:07.892 | "\(.module_name) \(.executed)"' 00:29:07.892 08:29:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:07.892 08:29:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:07.892 08:29:41 -- host/digest.sh@93 -- # exp_module=software 00:29:07.892 08:29:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:07.892 08:29:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:07.892 08:29:41 -- host/digest.sh@97 -- # killprocess 2430496 00:29:07.892 08:29:41 -- common/autotest_common.sh@924 -- # '[' -z 2430496 ']' 00:29:07.892 08:29:41 -- common/autotest_common.sh@928 -- # kill -0 2430496 00:29:07.892 08:29:41 -- common/autotest_common.sh@929 -- # uname 00:29:07.892 08:29:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:07.892 
08:29:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2430496 00:29:07.892 08:29:41 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:07.892 08:29:41 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:07.892 08:29:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2430496' 00:29:07.892 killing process with pid 2430496 00:29:07.892 08:29:41 -- common/autotest_common.sh@943 -- # kill 2430496 00:29:07.892 Received shutdown signal, test time was about 2.000000 seconds 00:29:07.892 00:29:07.892 Latency(us) 00:29:07.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.892 =================================================================================================================== 00:29:07.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.892 08:29:41 -- common/autotest_common.sh@948 -- # wait 2430496 00:29:07.892 08:29:41 -- host/digest.sh@126 -- # killprocess 2428428 00:29:07.892 08:29:41 -- common/autotest_common.sh@924 -- # '[' -z 2428428 ']' 00:29:07.892 08:29:41 -- common/autotest_common.sh@928 -- # kill -0 2428428 00:29:07.892 08:29:41 -- common/autotest_common.sh@929 -- # uname 00:29:07.892 08:29:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:07.892 08:29:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2428428 00:29:07.892 08:29:41 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:07.892 08:29:41 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:07.892 08:29:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2428428' 00:29:07.892 killing process with pid 2428428 00:29:07.893 08:29:41 -- common/autotest_common.sh@943 -- # kill 2428428 00:29:07.893 08:29:41 -- common/autotest_common.sh@948 -- # wait 2428428 00:29:08.152 00:29:08.152 real 0m16.797s 00:29:08.152 user 0m32.497s 00:29:08.152 sys 0m4.001s 00:29:08.152 08:29:41 -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:29:08.152 08:29:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.152 ************************************ 00:29:08.152 END TEST nvmf_digest_clean 00:29:08.152 ************************************ 00:29:08.152 08:29:41 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:08.152 08:29:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:29:08.152 08:29:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:08.152 08:29:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.152 ************************************ 00:29:08.152 START TEST nvmf_digest_error 00:29:08.152 ************************************ 00:29:08.152 08:29:41 -- common/autotest_common.sh@1102 -- # run_digest_error 00:29:08.152 08:29:41 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:08.152 08:29:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:08.152 08:29:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:08.152 08:29:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.152 08:29:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:08.152 08:29:41 -- nvmf/common.sh@469 -- # nvmfpid=2431225 00:29:08.152 08:29:41 -- nvmf/common.sh@470 -- # waitforlisten 2431225 00:29:08.152 08:29:41 -- common/autotest_common.sh@817 -- # '[' -z 2431225 ']' 00:29:08.152 08:29:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.152 08:29:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:08.152 08:29:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:08.152 08:29:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:08.152 08:29:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.152 [2024-02-13 08:29:41.796623] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:08.152 [2024-02-13 08:29:41.796676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.152 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.411 [2024-02-13 08:29:41.858105] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.411 [2024-02-13 08:29:41.933387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:08.411 [2024-02-13 08:29:41.933492] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.411 [2024-02-13 08:29:41.933499] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.411 [2024-02-13 08:29:41.933506] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:08.411 [2024-02-13 08:29:41.933521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.980 08:29:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:08.980 08:29:42 -- common/autotest_common.sh@850 -- # return 0 00:29:08.980 08:29:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:08.980 08:29:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:08.980 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:29:08.980 08:29:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.980 08:29:42 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:08.980 08:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.980 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:29:08.980 [2024-02-13 08:29:42.631544] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:08.980 08:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:08.980 08:29:42 -- host/digest.sh@104 -- # common_target_config 00:29:08.980 08:29:42 -- host/digest.sh@43 -- # rpc_cmd 00:29:08.980 08:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:08.980 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:29:09.239 null0 00:29:09.239 [2024-02-13 08:29:42.723191] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.239 [2024-02-13 08:29:42.747369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.239 08:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:09.239 08:29:42 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:09.239 08:29:42 -- host/digest.sh@54 -- # local rw bs qd 00:29:09.239 08:29:42 -- host/digest.sh@56 -- # rw=randread 00:29:09.239 08:29:42 -- host/digest.sh@56 -- # bs=4096 00:29:09.239 08:29:42 -- host/digest.sh@56 -- # qd=128 00:29:09.239 08:29:42 -- 
host/digest.sh@58 -- # bperfpid=2431467 00:29:09.239 08:29:42 -- host/digest.sh@60 -- # waitforlisten 2431467 /var/tmp/bperf.sock 00:29:09.239 08:29:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:09.239 08:29:42 -- common/autotest_common.sh@817 -- # '[' -z 2431467 ']' 00:29:09.239 08:29:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.239 08:29:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:09.239 08:29:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.239 08:29:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:09.239 08:29:42 -- common/autotest_common.sh@10 -- # set +x 00:29:09.239 [2024-02-13 08:29:42.795548] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:09.239 [2024-02-13 08:29:42.795591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431467 ]
00:29:09.239 EAL: No free 2048 kB hugepages reported on node 1
00:29:09.239 [2024-02-13 08:29:42.854275] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:09.498 [2024-02-13 08:29:42.931114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:10.067 08:29:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:10.067 08:29:43 -- common/autotest_common.sh@850 -- # return 0
00:29:10.067 08:29:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:10.067 08:29:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:10.067 08:29:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:10.067 08:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:10.067 08:29:43 -- common/autotest_common.sh@10 -- # set +x
00:29:10.067 08:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:10.067 08:29:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:10.067 08:29:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:10.636 nvme0n1
00:29:10.636 08:29:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:10.636 08:29:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:10.636 08:29:44 -- common/autotest_common.sh@10 -- # set +x
00:29:10.636 08:29:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:10.636 08:29:44 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:10.636 08:29:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:10.636 Running I/O for 2 seconds...
00:29:10.636 [2024-02-13 08:29:44.182788] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:10.636 [2024-02-13 08:29:44.182819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.636 [2024-02-13 08:29:44.182830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:10.636 [2024-02-13 08:29:44.191881] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:10.636 [2024-02-13 08:29:44.191903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.636 [2024-02-13 08:29:44.191912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:10.636 [2024-02-13 08:29:44.200251] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:10.636 [2024-02-13 08:29:44.200271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.636 [2024-02-13 08:29:44.200280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:10.636 [2024-02-13 08:29:44.208620]
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.636 [2024-02-13 08:29:44.208641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.636 [2024-02-13 08:29:44.208656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.636 [2024-02-13 08:29:44.217522] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.636 [2024-02-13 08:29:44.217543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.636 [2024-02-13 08:29:44.217552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.636 [2024-02-13 08:29:44.226022] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.636 [2024-02-13 08:29:44.226042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.636 [2024-02-13 08:29:44.226050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.636 [2024-02-13 08:29:44.234297] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.636 [2024-02-13 08:29:44.234316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.636 [2024-02-13 08:29:44.234325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:10.636 [2024-02-13 08:29:44.242885] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.636 [2024-02-13 08:29:44.242904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.636 [2024-02-13 08:29:44.242912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.251252] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.251272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.251280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.259663] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.259683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.259691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.268565] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.268584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.268596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.276799] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.276819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.276826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.285191] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.285210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.285218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.294103] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.294121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.294129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.302240] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.302259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.302267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.310759] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.310778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.310786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.637 [2024-02-13 08:29:44.318977] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.637 [2024-02-13 08:29:44.318996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.637 [2024-02-13 08:29:44.319005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.327825] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.327845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.336346] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.336366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6036 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.336374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.344470] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.344501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.353481] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.353500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.353508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.361601] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.361621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.361629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.370186] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.370205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:14084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.370213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.379305] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.379325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.379333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.387687] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.387706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.387714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.395852] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.395871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.395879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.404069] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.404088] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.404096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.412964] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.412983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.412991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.421241] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.421260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.421268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.429719] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.429738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.429746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.438440] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.438458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.438466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.446989] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.447008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.447017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.455041] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.455060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.455068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.463663] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.463683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.463691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.471984] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.472003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.472011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.480245] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.480265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.480273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.488876] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.488898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.488906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.496989] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.497008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.497016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.505396] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.505416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.505425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.514344] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.514362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.514370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.522242] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.522261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.522269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.530503] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.898 [2024-02-13 08:29:44.530521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.898 [2024-02-13 08:29:44.530529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.898 [2024-02-13 08:29:44.539278] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.899 [2024-02-13 08:29:44.539297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.899 [2024-02-13 08:29:44.539306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.899 [2024-02-13 08:29:44.547437] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.899 [2024-02-13 08:29:44.547456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.899 [2024-02-13 08:29:44.547464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.899 [2024-02-13 08:29:44.555847] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.899 [2024-02-13 08:29:44.555866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.899 [2024-02-13 08:29:44.555874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.899 [2024-02-13 08:29:44.564441] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.899 [2024-02-13 08:29:44.564460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.899 [2024-02-13 08:29:44.564468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.899 [2024-02-13 08:29:44.572815] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.899 [2024-02-13 08:29:44.572833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.899 [2024-02-13 08:29:44.572841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.899 [2024-02-13 08:29:44.581043] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:10.899 [2024-02-13 08:29:44.581062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.899 [2024-02-13 08:29:44.581071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.159 [2024-02-13 08:29:44.590017] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.159 [2024-02-13 08:29:44.590036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.159 [2024-02-13 08:29:44.590044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.159 [2024-02-13 08:29:44.598251] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.159 [2024-02-13 08:29:44.598269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:11.159 [2024-02-13 08:29:44.598277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.159 [2024-02-13 08:29:44.607333] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.159 [2024-02-13 08:29:44.607352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.159 [2024-02-13 08:29:44.607360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.159 [2024-02-13 08:29:44.615419] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.159 [2024-02-13 08:29:44.615437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.159 [2024-02-13 08:29:44.615445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.159 [2024-02-13 08:29:44.623708] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.159 [2024-02-13 08:29:44.623727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.159 [2024-02-13 08:29:44.623735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.159 [2024-02-13 08:29:44.632717] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.159 [2024-02-13 08:29:44.632736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:24265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.159 [2024-02-13 08:29:44.632747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.640765] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.640784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.640791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.649012] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.649031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.649040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.657452] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.657471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.657478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.665998] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.666017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.666025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.674028] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.674048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.674056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.682783] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.682801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.682809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.691035] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.691054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.691062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.699274] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.699293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.699301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.708255] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.708276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.708283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.716464] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.716483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.716491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.725073] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.725092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.725100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.733313] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.733333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.733341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.741556] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.741576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.741584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.750331] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.750350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.750358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.758939] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.758959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.758967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.767245] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.767266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.767274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.775408] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.775429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.775437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.784783] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.784804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.784812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.793106] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.793127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.793135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.802239] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.802258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.802266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.810011] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.810030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.810038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.818905] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.818924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.818932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.827086] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.827106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.827114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.836022] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.836041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.836048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.160 [2024-02-13 08:29:44.844361] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.160 [2024-02-13 08:29:44.844380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.160 [2024-02-13 08:29:44.844389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.852829] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.852848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.852858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.861158] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.861178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.861186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.869957] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.869976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.869984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.878315] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.878335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.878344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.886538] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.886558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.886566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.895270] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.895289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.895297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.903575] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.903602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.912045] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.912064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.912072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.920787] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.920806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.920815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.928935] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.928958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.928966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.937008] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.937026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.937034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.945928] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.945948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.945955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.954318] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.954337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.954345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.962728] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.962748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.962756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.971523] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.971543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.971551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.979596] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.421 [2024-02-13 08:29:44.979615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.421 [2024-02-13 08:29:44.979623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.421 [2024-02-13 08:29:44.987956] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:44.987976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:44.987984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:44.996578] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:44.996598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:44.996606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.004953] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.004972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.004980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.013128] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.013148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.013156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.022037] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.022057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.022064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.030772] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.030792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.030800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.038598] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.038618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.038626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.047652] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.047672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.047679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.056094] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.056114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.056122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.064615] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.064634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.064643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.072892] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.072915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.072923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.081218] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.081237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.081245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.089619] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.089638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.089650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.098695] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.098714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.098722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.422 [2024-02-13 08:29:45.107199] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.422 [2024-02-13 08:29:45.107219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.422 [2024-02-13 08:29:45.107228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.115345] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.115365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.115373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.124123] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.124142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.124150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.132432] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.132452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.132460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.140720] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.140739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.140747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.149621] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.149640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.149653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.157639] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.157664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.157672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.166544] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.166563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.166571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.176901] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.176920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.176928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.184280] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.184299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.184307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.195728] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.195748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.195755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.207970] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.207989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.207997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.217660] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.217678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.217686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.226269] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.226289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.226301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.235269] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.235290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.235298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.243716] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.243736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.243743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.252586] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.252605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.252613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.262169] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.262187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.262196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.270249] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.683 [2024-02-13 08:29:45.270267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.683 [2024-02-13 08:29:45.270275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.683 [2024-02-13 08:29:45.283289] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.684 [2024-02-13 08:29:45.283308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.684 [2024-02-13 08:29:45.283315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.684 [2024-02-13 08:29:45.293877] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.684 [2024-02-13 08:29:45.293896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.684 [2024-02-13 08:29:45.293904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.684 [2024-02-13 08:29:45.304742] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080)
00:29:11.684 [2024-02-13 08:29:45.304761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.684 [2024-02-13 08:29:45.304769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:29:11.684 [2024-02-13 08:29:45.312601] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.684 [2024-02-13 08:29:45.312623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-02-13 08:29:45.312631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-02-13 08:29:45.320906] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.684 [2024-02-13 08:29:45.320924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-02-13 08:29:45.320932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-02-13 08:29:45.329643] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.684 [2024-02-13 08:29:45.329667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-02-13 08:29:45.329675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-02-13 08:29:45.338509] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.684 [2024-02-13 08:29:45.338528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-02-13 08:29:45.338537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-02-13 08:29:45.347333] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.684 [2024-02-13 08:29:45.347352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-02-13 08:29:45.347360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-02-13 08:29:45.355243] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.684 [2024-02-13 08:29:45.355262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-02-13 08:29:45.355270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-02-13 08:29:45.366298] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.684 [2024-02-13 08:29:45.366317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-02-13 08:29:45.366326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-02-13 08:29:45.375356] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.944 [2024-02-13 08:29:45.375376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-02-13 08:29:45.375384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-02-13 08:29:45.384109] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.944 [2024-02-13 08:29:45.384128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-02-13 08:29:45.384136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-02-13 08:29:45.392264] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.944 [2024-02-13 08:29:45.392283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-02-13 08:29:45.392291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-02-13 08:29:45.400289] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.944 [2024-02-13 08:29:45.400309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-02-13 08:29:45.400318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-02-13 08:29:45.409225] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.944 [2024-02-13 08:29:45.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:11.944 [2024-02-13 08:29:45.409252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-02-13 08:29:45.417439] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.417458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.417466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.425904] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.425922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.425930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.434048] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.434066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.434075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.443137] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.443156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:73 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.443164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.453134] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.453154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.453162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.460760] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.460779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.460789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.469538] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.469557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.469565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.478504] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.478521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.478529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.486999] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.487017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.487025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.496081] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.496100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.496108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.505175] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.505194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.505202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.513129] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.513148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.513156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.521297] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.521316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.521323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.530328] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.530347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.530354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.538515] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.538536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.538544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.546740] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.546759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.546766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.555493] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.555511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.555519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.564550] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.564570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.564578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.572135] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.572154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.572162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.582707] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.582726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.582734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.593825] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.593844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.593851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.603376] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.603395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.603402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.613714] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.613733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.613740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-02-13 08:29:45.621548] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:11.945 [2024-02-13 08:29:45.621567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-02-13 08:29:45.621575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.631606] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.631626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.631635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.639055] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.639074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.639082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.648483] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.648502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.648511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.656973] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.656992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.657000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.666832] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.666851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.666860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.674979] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.674998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.675006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.684984] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.685004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14030 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.685012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.693295] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.693313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.693379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.702224] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.702242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.702250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.711179] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.711198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.711206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.719094] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.206 [2024-02-13 08:29:45.719113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:71 nsid:1 lba:18248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-02-13 08:29:45.719120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-02-13 08:29:45.729416] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.729436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.729444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.737332] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.737351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.737359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.747126] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.747146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.747153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.754849] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.754867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.754875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.762794] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.762812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.762820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.774977] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.774996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.775004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.783157] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.783176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.783184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.793410] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.793429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.793438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.801680] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.801699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.801707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.810145] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.810164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.810172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.818507] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.818527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.818535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.827242] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.827261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.827269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.836605] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.836624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.836632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.843972] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.843991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.844002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.853358] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.853377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.853385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.862465] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.862483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.862491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.870261] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.870279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.870287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.879034] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.879053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.879061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.207 [2024-02-13 08:29:45.888514] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.207 [2024-02-13 08:29:45.888533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.207 [2024-02-13 08:29:45.888541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.896201] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.896220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.896228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.906392] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.906411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.906419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.914356] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.914375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.914382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.922925] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.922947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.922956] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.931170] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.931189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.931197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.940457] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.940476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.940484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.948043] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.948061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.948069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.959477] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.959495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.959504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.968956] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.968974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.968982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.977205] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.977223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.977231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.985279] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.985298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.985305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:45.993761] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:45.993780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:37 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:45.993788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.002822] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.002841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.002849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.010661] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.010681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.010689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.018995] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.019014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.019022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.027386] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.027406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.027414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.035969] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.035989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.035996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.044872] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.044892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.044901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.053235] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.053254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.053262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.468 [2024-02-13 08:29:46.061386] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb86080) 00:29:12.468 [2024-02-13 08:29:46.061405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.468 [2024-02-13 08:29:46.061413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.070318] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.070337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.070348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.078569] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.078587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.078595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.086998] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.087017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.087025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.095483] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.095501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.095509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.103785] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.103804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.103812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.112292] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.112311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.112319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.120985] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.121004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.121011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.129077] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.129095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.129103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.137286] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.137304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.137312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.469 [2024-02-13 08:29:46.145791] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.469 [2024-02-13 08:29:46.145815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.469 [2024-02-13 08:29:46.145823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.728 [2024-02-13 08:29:46.154318] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.728 [2024-02-13 08:29:46.154337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.728 [2024-02-13 08:29:46.154346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.728 [2024-02-13 08:29:46.163095] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.728 [2024-02-13 08:29:46.163115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.728 [2024-02-13 08:29:46.163123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.728 [2024-02-13 08:29:46.171001] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb86080) 00:29:12.728 [2024-02-13 08:29:46.171021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.728 [2024-02-13 08:29:46.171029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.728 00:29:12.728 Latency(us) 00:29:12.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.728 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:12.728 nvme0n1 : 2.04 28741.59 112.27 0.00 0.00 4361.92 1825.65 44689.31 00:29:12.728 =================================================================================================================== 00:29:12.728 Total : 28741.59 112.27 0.00 0.00 4361.92 1825.65 44689.31 00:29:12.728 0 00:29:12.728 08:29:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:12.728 08:29:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:12.728 08:29:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:12.728 | .driver_specific 00:29:12.728 | .nvme_error 00:29:12.728 | .status_code 00:29:12.728 | .command_transient_transport_error' 00:29:12.728 08:29:46 
-- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:12.728 08:29:46 -- host/digest.sh@71 -- # (( 230 > 0 )) 00:29:12.728 08:29:46 -- host/digest.sh@73 -- # killprocess 2431467 00:29:12.728 08:29:46 -- common/autotest_common.sh@924 -- # '[' -z 2431467 ']' 00:29:12.728 08:29:46 -- common/autotest_common.sh@928 -- # kill -0 2431467 00:29:12.728 08:29:46 -- common/autotest_common.sh@929 -- # uname 00:29:12.728 08:29:46 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:12.728 08:29:46 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2431467 00:29:12.988 08:29:46 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:12.988 08:29:46 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:12.988 08:29:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2431467' 00:29:12.988 killing process with pid 2431467 00:29:12.988 08:29:46 -- common/autotest_common.sh@943 -- # kill 2431467 00:29:12.988 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.988 00:29:12.988 Latency(us) 00:29:12.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.988 =================================================================================================================== 00:29:12.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.988 08:29:46 -- common/autotest_common.sh@948 -- # wait 2431467 00:29:12.988 08:29:46 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:29:12.988 08:29:46 -- host/digest.sh@54 -- # local rw bs qd 00:29:12.988 08:29:46 -- host/digest.sh@56 -- # rw=randread 00:29:12.988 08:29:46 -- host/digest.sh@56 -- # bs=131072 00:29:12.988 08:29:46 -- host/digest.sh@56 -- # qd=16 00:29:12.988 08:29:46 -- host/digest.sh@58 -- # bperfpid=2432168 00:29:12.988 08:29:46 -- host/digest.sh@60 -- # waitforlisten 2432168 /var/tmp/bperf.sock 00:29:12.988 
08:29:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:12.988 08:29:46 -- common/autotest_common.sh@817 -- # '[' -z 2432168 ']' 00:29:12.988 08:29:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.988 08:29:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:12.988 08:29:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.988 08:29:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:12.988 08:29:46 -- common/autotest_common.sh@10 -- # set +x 00:29:13.248 [2024-02-13 08:29:46.689621] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:13.248 [2024-02-13 08:29:46.689677] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432168 ] 00:29:13.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:13.248 Zero copy mechanism will not be used. 
00:29:13.248 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.248 [2024-02-13 08:29:46.749019] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.248 [2024-02-13 08:29:46.825004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.815 08:29:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:13.815 08:29:47 -- common/autotest_common.sh@850 -- # return 0 00:29:13.815 08:29:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:13.815 08:29:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:14.074 08:29:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:14.074 08:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.074 08:29:47 -- common/autotest_common.sh@10 -- # set +x 00:29:14.074 08:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.074 08:29:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.074 08:29:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.641 nvme0n1 00:29:14.641 08:29:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:14.641 08:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.642 08:29:48 -- common/autotest_common.sh@10 -- # set +x 00:29:14.642 08:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.642 08:29:48 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:14.642 08:29:48 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.642 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.642 Zero copy mechanism will not be used. 00:29:14.642 Running I/O for 2 seconds... 00:29:14.642 [2024-02-13 08:29:48.217251] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:14.642 [2024-02-13 08:29:48.217284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.642 [2024-02-13 08:29:48.217294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.642 [2024-02-13 08:29:48.230330] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:14.642 [2024-02-13 08:29:48.230354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.642 [2024-02-13 08:29:48.230363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.642 [2024-02-13 08:29:48.241010] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:14.642 [2024-02-13 08:29:48.241032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.642 [2024-02-13 08:29:48.241041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.642 [2024-02-13 08:29:48.251988] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 
00:29:14.642 [2024-02-13 08:29:48.252009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.642 [2024-02-13 08:29:48.252018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.642 [2024-02-13 08:29:48.263056] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:14.642 [2024-02-13 08:29:48.263077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.642 [2024-02-13 08:29:48.263085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.642 [2024-02-13 08:29:48.273819] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:14.642 [2024-02-13 08:29:48.273839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.642 [2024-02-13 08:29:48.273847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.642 [2024-02-13 08:29:48.283662] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:14.642 [2024-02-13 08:29:48.283689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.642 [2024-02-13 08:29:48.283698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.642 [2024-02-13 08:29:48.293495] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.642 [2024-02-13 08:29:48.293516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.642 [2024-02-13 08:29:48.293524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.642 [2024-02-13 08:29:48.304686] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.642 [2024-02-13 08:29:48.304707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.642 [2024-02-13 08:29:48.304716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.642 [2024-02-13 08:29:48.315763] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.642 [2024-02-13 08:29:48.315788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.642 [2024-02-13 08:29:48.315802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.642 [2024-02-13 08:29:48.327704] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.642 [2024-02-13 08:29:48.327726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.642 [2024-02-13 08:29:48.327735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.340051] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.340072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.340080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.351697] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.351717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.351726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.363148] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.363169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.363177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.374922] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.374942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.374950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.386066] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.386087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.386095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.396165] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.396184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.396192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.405847] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.405868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.405876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.415389] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.415409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.415418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.425585] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.425609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.425617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.437714] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.437735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.437743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.446307] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.446328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.446336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.456451] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.456472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.456481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.466828] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.466848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.466857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.476847] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.476868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.476876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.486977] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.486997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.487005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.496983] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.497002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.497013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.507139] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.902 [2024-02-13 08:29:48.507159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.902 [2024-02-13 08:29:48.507168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.902 [2024-02-13 08:29:48.516558] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.516578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.516586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.903 [2024-02-13 08:29:48.525597] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.525616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.525624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.903 [2024-02-13 08:29:48.534916] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.534936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.534944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.903 [2024-02-13 08:29:48.544348] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.544368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.544376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.903 [2024-02-13 08:29:48.554788] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.554808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.554817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:14.903 [2024-02-13 08:29:48.565031] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.565051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.565060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:14.903 [2024-02-13 08:29:48.576059] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.576079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.576087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:14.903 [2024-02-13 08:29:48.585789] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:14.903 [2024-02-13 08:29:48.585813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.903 [2024-02-13 08:29:48.585822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.162 [2024-02-13 08:29:48.596094] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.162 [2024-02-13 08:29:48.596114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-02-13 08:29:48.596123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.162 [2024-02-13 08:29:48.607233] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.162 [2024-02-13 08:29:48.607253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.162 [2024-02-13 08:29:48.607261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.162 [2024-02-13 08:29:48.618517] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.618536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.628099] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.628123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.628131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.637223] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.637243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.637251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.646374] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.646395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.646403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.656010] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.656032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.656040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.664342] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.664362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.664370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.672613] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.672632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.672639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.681044] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.681064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.681071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.689480] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.689499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.689507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.697752] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.697771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.697779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.706174] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.706193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.706201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.714451] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.714471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.714479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.722783] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.722802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.722810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.731021] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.731040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.731047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.739368] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.739391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.739398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.747659] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.747678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.747685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.755950] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.755969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.755976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.764239] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.764258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.764266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.772560] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.772579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.772586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.780854] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.780874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.780881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.789163] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.789182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.789189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.797403] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.797421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.797428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.805783] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.805802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.805810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.814077] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.814096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.814104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.822490] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.822509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.822517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.163 [2024-02-13 08:29:48.830784] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.163 [2024-02-13 08:29:48.830803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-02-13 08:29:48.830810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.164 [2024-02-13 08:29:48.839279] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.164 [2024-02-13 08:29:48.839298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.164 [2024-02-13 08:29:48.839306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.164 [2024-02-13 08:29:48.847625] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.164 [2024-02-13 08:29:48.847643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.164 [2024-02-13 08:29:48.847658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.426 [2024-02-13 08:29:48.856028] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.426 [2024-02-13 08:29:48.856047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.426 [2024-02-13 08:29:48.856054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.426 [2024-02-13 08:29:48.864286] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.426 [2024-02-13 08:29:48.864305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.426 [2024-02-13 08:29:48.864312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.426 [2024-02-13 08:29:48.872607] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.426 [2024-02-13 08:29:48.872626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.426 [2024-02-13 08:29:48.872633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.426 [2024-02-13 08:29:48.880977] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.426 [2024-02-13 08:29:48.880996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.426 [2024-02-13 08:29:48.881007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.426 [2024-02-13 08:29:48.889214] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.426 [2024-02-13 08:29:48.889232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.426 [2024-02-13 08:29:48.889239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.426 [2024-02-13 08:29:48.897587] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.897606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.897613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.905935] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.905954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.905961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.914289] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.914307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.914315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.922568] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.922587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.922594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.930901] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.930920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.930928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.939206] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.939225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.939233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.947547] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.947565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.947572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.955826] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.955847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.955854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.964069] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.964087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.964094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.972328] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.972346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.972354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.980664] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.980683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.980690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.989315] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.989334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.989342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:48.997713] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:48.997731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:48.997739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:49.006140] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:49.006159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:49.006166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:49.014440] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:49.014460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:49.014469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:49.022798] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:49.022819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:49.022826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:49.031108] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20)
00:29:15.427 [2024-02-13 08:29:49.031127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.427 [2024-02-13 08:29:49.031135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.427 [2024-02-13 08:29:49.039423]
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.039443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.039451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.047794] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.047813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.047821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.056095] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.056115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.056123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.064772] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.064792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.064801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.073172] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.073192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.073199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.081955] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.081974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.081982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.090538] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.090557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.090565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.099237] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.099263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.099271] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.427 [2024-02-13 08:29:49.108320] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.427 [2024-02-13 08:29:49.108340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.427 [2024-02-13 08:29:49.108348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.117699] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.117718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.117726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.127361] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.127382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.127390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.136963] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.136984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 
08:29:49.136992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.145983] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.146004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.146013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.155322] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.155343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.155351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.164015] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.164041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.164050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.173017] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.173036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.173044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.181913] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.181933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.181941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.191764] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.191785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.191794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.202412] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.202432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.202440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.213298] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.213318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.213327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.223781] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.223802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.223810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.234536] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.234556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.234564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.244818] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.244840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.244848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.255406] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.255427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.255435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.266696] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.266717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.266729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.732 [2024-02-13 08:29:49.277793] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.732 [2024-02-13 08:29:49.277813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.732 [2024-02-13 08:29:49.277821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.287502] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.287522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.287530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.298476] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.298497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.298505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.309290] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.309310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.309319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.319287] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.319308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.319316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.330458] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.330479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.330487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.341149] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.341169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.341177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.352292] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.352313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.352321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.362233] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.362398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.362406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.371441] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.371461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.371470] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.380511] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.380531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.380538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.733 [2024-02-13 08:29:49.389378] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.733 [2024-02-13 08:29:49.389399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.733 [2024-02-13 08:29:49.389407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.993 [2024-02-13 08:29:49.398378] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.993 [2024-02-13 08:29:49.398399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.993 [2024-02-13 08:29:49.398406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.993 [2024-02-13 08:29:49.407412] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.993 [2024-02-13 08:29:49.407432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.993 [2024-02-13 
08:29:49.407440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.993 [2024-02-13 08:29:49.417158] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.993 [2024-02-13 08:29:49.417178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.993 [2024-02-13 08:29:49.417186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.993 [2024-02-13 08:29:49.426792] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.993 [2024-02-13 08:29:49.426813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.426821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.436513] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.436534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.436545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.446627] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.446651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.446659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.455261] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.455279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.455287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.464395] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.464414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.464422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.480772] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.480791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.480799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.493872] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.493892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.493899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.505469] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.505489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.505497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.523975] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.523995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.524003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.540469] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.540489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.540497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.557115] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.557138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.557147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.570210] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.570229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.570237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.579997] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.580015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.580023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.589978] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.589998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.590006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.600114] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.600135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.600143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.608842] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.608862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.608869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.617426] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.617445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.617453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.628269] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.628288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.628296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.645588] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.645607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.645615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.660463] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.660483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.660491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.669806] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.669825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.669833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:15.994 [2024-02-13 08:29:49.678290] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:15.994 [2024-02-13 08:29:49.678309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.994 [2024-02-13 08:29:49.678317] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.686890] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.686909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.686916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.703137] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.703156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.703164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.715278] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.715297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.715305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.724638] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.724664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 
08:29:49.724672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.733510] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.733530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.733537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.742804] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.742824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.742836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.751730] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.751759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.760973] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.760993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.761002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.772814] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.772834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.772842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.788835] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.788854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.788862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.804600] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.804620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.804628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.818488] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.818508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.818516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.829843] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.829864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.255 [2024-02-13 08:29:49.829872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.255 [2024-02-13 08:29:49.841029] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.255 [2024-02-13 08:29:49.841049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.841057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.854745] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.854768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.854776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.871511] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.871531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.871538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.883375] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.883394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.883402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.893714] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.893734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.893741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.902720] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.902739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.902747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.911805] 
nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.911824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.911831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.920775] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.920794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.920801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.256 [2024-02-13 08:29:49.929805] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.256 [2024-02-13 08:29:49.929823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.256 [2024-02-13 08:29:49.929831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:49.946639] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:49.946666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:49.946677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:49.958631] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:49.958654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:49.958662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:49.970095] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:49.970114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:49.970122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:49.982196] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:49.982216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:49.982224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:49.994000] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:49.994020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:49.994029] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.005313] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.005334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.005343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.017103] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.017126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.017135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.032266] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.032287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.032296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.047820] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.047843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 
08:29:50.047853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.060481] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.060507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.060515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.070879] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.070899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.070908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.080377] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.080397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.080406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.090263] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.090317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.090333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.099330] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.099350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.099359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.114735] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.114755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.114764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.129729] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.129750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.129759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.142449] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.142469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.142478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.155073] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.155094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.155103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.166100] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.166121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.166129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.517 [2024-02-13 08:29:50.184508] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1720a20) 00:29:16.517 [2024-02-13 08:29:50.184529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.517 [2024-02-13 08:29:50.184537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.776 [2024-02-13 08:29:50.240021] nvme_tcp.c:1389:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1720a20) 00:29:16.776 [2024-02-13 08:29:50.240041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.776 [2024-02-13 08:29:50.240049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.776 00:29:16.776 Latency(us) 00:29:16.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.776 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:16.776 nvme0n1 : 2.05 2921.03 365.13 0.00 0.00 5369.97 3838.54 49183.21 00:29:16.777 =================================================================================================================== 00:29:16.777 Total : 2921.03 365.13 0.00 0.00 5369.97 3838.54 49183.21 00:29:16.777 0 00:29:16.777 08:29:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:16.777 08:29:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:16.777 08:29:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:16.777 | .driver_specific 00:29:16.777 | .nvme_error 00:29:16.777 | .status_code 00:29:16.777 | .command_transient_transport_error' 00:29:16.777 08:29:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:16.777 08:29:50 -- host/digest.sh@71 -- # (( 193 > 0 )) 00:29:16.777 08:29:50 -- host/digest.sh@73 -- # killprocess 2432168 00:29:16.777 08:29:50 -- common/autotest_common.sh@924 -- # '[' -z 2432168 ']' 00:29:16.777 08:29:50 -- common/autotest_common.sh@928 -- # kill -0 2432168 00:29:16.777 08:29:50 -- common/autotest_common.sh@929 -- # uname 00:29:16.777 08:29:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:16.777 08:29:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2432168 00:29:17.036 08:29:50 -- common/autotest_common.sh@930 
-- # process_name=reactor_1 00:29:17.036 08:29:50 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:17.036 08:29:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2432168' 00:29:17.036 killing process with pid 2432168 00:29:17.036 08:29:50 -- common/autotest_common.sh@943 -- # kill 2432168 00:29:17.036 Received shutdown signal, test time was about 2.000000 seconds 00:29:17.036 00:29:17.036 Latency(us) 00:29:17.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.037 =================================================================================================================== 00:29:17.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.037 08:29:50 -- common/autotest_common.sh@948 -- # wait 2432168 00:29:17.037 08:29:50 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:29:17.037 08:29:50 -- host/digest.sh@54 -- # local rw bs qd 00:29:17.037 08:29:50 -- host/digest.sh@56 -- # rw=randwrite 00:29:17.037 08:29:50 -- host/digest.sh@56 -- # bs=4096 00:29:17.037 08:29:50 -- host/digest.sh@56 -- # qd=128 00:29:17.037 08:29:50 -- host/digest.sh@58 -- # bperfpid=2432777 00:29:17.037 08:29:50 -- host/digest.sh@60 -- # waitforlisten 2432777 /var/tmp/bperf.sock 00:29:17.037 08:29:50 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:17.037 08:29:50 -- common/autotest_common.sh@817 -- # '[' -z 2432777 ']' 00:29:17.037 08:29:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.037 08:29:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:17.037 08:29:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:17.037 08:29:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:17.037 08:29:50 -- common/autotest_common.sh@10 -- # set +x 00:29:17.296 [2024-02-13 08:29:50.727323] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:17.296 [2024-02-13 08:29:50.727372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2432777 ] 00:29:17.296 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.296 [2024-02-13 08:29:50.786917] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.296 [2024-02-13 08:29:50.863025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.864 08:29:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:17.864 08:29:51 -- common/autotest_common.sh@850 -- # return 0 00:29:17.864 08:29:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:17.864 08:29:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.124 08:29:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:18.124 08:29:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.124 08:29:51 -- common/autotest_common.sh@10 -- # set +x 00:29:18.124 08:29:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.124 08:29:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.124 08:29:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:29:18.383 nvme0n1 00:29:18.383 08:29:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:18.383 08:29:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.383 08:29:52 -- common/autotest_common.sh@10 -- # set +x 00:29:18.383 08:29:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.383 08:29:52 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:18.383 08:29:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.643 Running I/O for 2 seconds... 00:29:18.643 [2024-02-13 08:29:52.170328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190fcdd0 00:29:18.643 [2024-02-13 08:29:52.171261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.171290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.181060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f1ca0 00:29:18.643 [2024-02-13 08:29:52.182187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.182208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.188541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f7da8 00:29:18.643 [2024-02-13 08:29:52.189122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 
08:29:52.189142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.197280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190eff18 00:29:18.643 [2024-02-13 08:29:52.197873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.197891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.205943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f7538 00:29:18.643 [2024-02-13 08:29:52.206563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.206582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.214605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190fb048 00:29:18.643 [2024-02-13 08:29:52.215307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.215325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.223264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f35f0 00:29:18.643 [2024-02-13 08:29:52.224075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20627 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.224094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.231933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f7100 00:29:18.643 [2024-02-13 08:29:52.232326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.232344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.241796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190ef270 00:29:18.643 [2024-02-13 08:29:52.242891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.242909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.249577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e7c50 00:29:18.643 [2024-02-13 08:29:52.250374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.250392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.258266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190eaab8 00:29:18.643 [2024-02-13 08:29:52.259356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:120 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.259377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.266847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190fd640 00:29:18.643 [2024-02-13 08:29:52.267812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.267830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.275492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e7818 00:29:18.643 [2024-02-13 08:29:52.276542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.276561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.284104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190ea248 00:29:18.643 [2024-02-13 08:29:52.285152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.285171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.292770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f7538 00:29:18.643 [2024-02-13 08:29:52.294091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.294110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.301038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f81e0 00:29:18.643 [2024-02-13 08:29:52.301863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.301881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.309722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190fef90 00:29:18.643 [2024-02-13 08:29:52.310591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.310608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.318353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f5be8 00:29:18.643 [2024-02-13 08:29:52.319254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.319272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:18.643 [2024-02-13 08:29:52.327138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f57b0 
00:29:18.643 [2024-02-13 08:29:52.327869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.643 [2024-02-13 08:29:52.327887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:18.903 [2024-02-13 08:29:52.335726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190fac10 00:29:18.903 [2024-02-13 08:29:52.336193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.903 [2024-02-13 08:29:52.336211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:18.903 [2024-02-13 08:29:52.344349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f57b0 00:29:18.903 [2024-02-13 08:29:52.344741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.903 [2024-02-13 08:29:52.344759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:18.903 [2024-02-13 08:29:52.352953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e8d30 00:29:18.903 [2024-02-13 08:29:52.353288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.903 [2024-02-13 08:29:52.353306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.361620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb6dbc0) with pdu=0x2000190e9e10 00:29:18.904 [2024-02-13 08:29:52.361944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.361963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.371721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f96f8 00:29:18.904 [2024-02-13 08:29:52.373023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.373041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.380323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190fd640 00:29:18.904 [2024-02-13 08:29:52.381621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.381639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.388876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f20d8 00:29:18.904 [2024-02-13 08:29:52.390154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.390172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.397472] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f6890 00:29:18.904 [2024-02-13 08:29:52.398786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.406054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e9168 00:29:18.904 [2024-02-13 08:29:52.407252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.407269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.414682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190fac10 00:29:18.904 [2024-02-13 08:29:52.415942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.415960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.422217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190ebb98 00:29:18.904 [2024-02-13 08:29:52.423359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.423377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:29:18.904 [2024-02-13 08:29:52.430939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f2510 00:29:18.904 [2024-02-13 08:29:52.431960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.431978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.439478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e84c0 00:29:18.904 [2024-02-13 08:29:52.440568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.440586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.449000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e84c0 00:29:18.904 [2024-02-13 08:29:52.449829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.449846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.457285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e6b70 00:29:18.904 [2024-02-13 08:29:52.458883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.458900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.465535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190e6fa8 00:29:18.904 [2024-02-13 08:29:52.466272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.466290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.474097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f0350 00:29:18.904 [2024-02-13 08:29:52.475107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.475125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.482574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190feb58 00:29:18.904 [2024-02-13 08:29:52.483551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.483571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.491717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190eee38 00:29:18.904 [2024-02-13 08:29:52.492797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.492815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.500778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.501084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.501101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.509792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.510096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.510114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.518765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.519008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.519026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.527739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.527972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.527991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.536719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.536949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.536967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.545733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.545973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.545991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.554765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.555011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.555029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.563806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.564061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 
[2024-02-13 08:29:52.564082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.572794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.573036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.573054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:18.904 [2024-02-13 08:29:52.581772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:18.904 [2024-02-13 08:29:52.582016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.904 [2024-02-13 08:29:52.582034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.590933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.591183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.591202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.600089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.600332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25517 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.600350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.609259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.609525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.609544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.618279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.618522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.618540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.627280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.627523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.627541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.636241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.636486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:83 nsid:1 lba:5149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.636504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.645205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.645448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.645466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.654238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.654477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.654494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.663200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.663446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.663464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.672179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.672423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.672441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.681312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.681556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.681573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.690393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.690642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.690665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.699373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.699617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.699635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.708337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 
[2024-02-13 08:29:52.708580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.708598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.717374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.717617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.717634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.726333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.726569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.726586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.165 [2024-02-13 08:29:52.735298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.165 [2024-02-13 08:29:52.735535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.165 [2024-02-13 08:29:52.735554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.744287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) 
with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.744526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.744544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.753306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.753544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.753561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.762310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.762549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.762566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.771277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.771519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.771537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.780239] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.780483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.780502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.789261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.789503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.789521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.798160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.798404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.798425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.807362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.807610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.807628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.816553] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.816802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.816820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.825798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.826045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.826064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.835060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.835305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.835322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.166 [2024-02-13 08:29:52.844165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.166 [2024-02-13 08:29:52.844409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.166 [2024-02-13 08:29:52.844427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
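The repeated `tcp.c:2034:data_crc32_calc_done` errors above are NVMe/TCP data digest (DDGST) failures: the receiver recomputes CRC-32C over the PDU's data section and it does not match the digest carried in the PDU, so each affected WRITE completes with TRANSIENT TRANSPORT ERROR (00/22). In this run the mismatches are presumably injected on purpose by the digest-error test. As a rough illustration only (SPDK's real implementation is an optimized one, not this), a minimal bitwise CRC-32C sketch using the reflected Castagnoli polynomial 0x82F63B78:

```python
def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.

    Illustrative sketch of the digest NVMe/TCP uses for HDGST/DDGST;
    production code uses table-driven or hardware-accelerated variants.
    """
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift one bit; apply the polynomial when the low bit is set.
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard check value: CRC-32C of b"123456789" is 0xE3069283.
assert crc32c(b"123456789") == 0xE3069283
```

A digest error like the ones logged here corresponds to `crc32c(pdu_data)` differing from the DDGST field received on the wire.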
[... same three-record pattern repeats from 08:29:52.853 through 08:29:53.414: a tcp.c:2034:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08", the affected nvme_qpair.c:243 WRITE command notice (sqid:1, nsid:1, cids cycling through 83, 92, 94, 87, 105, 75, 33, 5, 108, 3, 4, 82, varying lbas, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000), and its nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (cdw0:0 sqhd:007f p:0 m:0 dnr:0) ...]
00:29:19.949 [2024-02-13 08:29:53.414150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.422869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.423109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.423127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.431868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.432117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.432135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.441025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.441260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.441278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.449999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.450243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7314 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.450262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.459000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.459238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.459256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.468030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.468272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.468289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.476985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.477229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.477247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.485984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.486220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.486238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.494997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.495236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.495254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.503965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.504203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.504221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.512974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.513220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.513238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.521960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.522204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.522221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.530938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.531177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.531195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.539919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.540158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.540177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.949 [2024-02-13 08:29:53.548880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.949 [2024-02-13 08:29:53.549117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.949 [2024-02-13 08:29:53.549135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.557849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 
[2024-02-13 08:29:53.558093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.558110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.566838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.567082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.567099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.575820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.576066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.576084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.584870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.585115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.585136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.593872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) 
with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.594116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.594133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.602873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.603108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.603126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.611921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.612174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.612191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.620937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.621183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.621201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:19.950 [2024-02-13 08:29:53.629916] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:19.950 [2024-02-13 08:29:53.630165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:19.950 [2024-02-13 08:29:53.630183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.639369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.639620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.639639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.648419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.648673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.648691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.657464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.657713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.657731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.666439] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.666684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.666702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.675420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.675661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.675679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.684396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.684641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.684663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.693540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.693786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.693804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:29:20.210 [2024-02-13 08:29:53.702497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.702822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.702841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.711491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.711736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.711755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.720547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.720789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.720807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.729479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.729728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.729746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.738488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.738735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.738752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.747466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.747711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.747729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.756394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.756754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.756772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.765471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.765727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.765745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.774455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.774692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.774710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.783446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.210 [2024-02-13 08:29:53.783701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.210 [2024-02-13 08:29:53.783719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.210 [2024-02-13 08:29:53.792445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.792697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.792715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.801425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.801664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.801683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.810423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.810731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.810749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.819465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.819766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.819783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.828505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.828797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.828814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.837482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.837733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 
[2024-02-13 08:29:53.837751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.846574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.846807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.846825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.855590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.855896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.855915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.864549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.864826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.864844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.873592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.873877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11809 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.873894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.882554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.882902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.882920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.211 [2024-02-13 08:29:53.891547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.211 [2024-02-13 08:29:53.891779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.211 [2024-02-13 08:29:53.891797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.471 [2024-02-13 08:29:53.900785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.471 [2024-02-13 08:29:53.901044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.471 [2024-02-13 08:29:53.901065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.471 [2024-02-13 08:29:53.909829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.471 [2024-02-13 08:29:53.910064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.471 [2024-02-13 08:29:53.910082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.471 [2024-02-13 08:29:53.918849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.471 [2024-02-13 08:29:53.919080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.471 [2024-02-13 08:29:53.919098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.471 [2024-02-13 08:29:53.927810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.471 [2024-02-13 08:29:53.928113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.471 [2024-02-13 08:29:53.928131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.471 [2024-02-13 08:29:53.936790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.471 [2024-02-13 08:29:53.937100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.471 [2024-02-13 08:29:53.937118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.471 [2024-02-13 08:29:53.945951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.471 [2024-02-13 08:29:53.946165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.471 [2024-02-13 08:29:53.946183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.471 [2024-02-13 08:29:53.954953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.471 [2024-02-13 08:29:53.955270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.471 [2024-02-13 08:29:53.955287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:53.963924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:53.964137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:53.964154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:53.972969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:53.973256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:53.973274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:53.981941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 
[2024-02-13 08:29:53.982193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:53.982211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:53.990913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:53.991211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:53.991229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:53.999932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.000234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.000252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.008951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.009253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.009271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.017931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.018225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.018243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.027153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.027452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.027471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.036146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.036393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.036411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.045115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.045361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.045378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.054104] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.054376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.054393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.063072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.063294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.063311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.072052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.072302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.072320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.080906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.081146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.081163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:29:20.472 [2024-02-13 08:29:54.089890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.090127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.090144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.098895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.099142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.099160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.107898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.108142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.108160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.116967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.117259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.117277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.126150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.126440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.126458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.135228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.135478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.135498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.144384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.144623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.144641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.472 [2024-02-13 08:29:54.153576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6dbc0) with pdu=0x2000190f4b08 00:29:20.472 [2024-02-13 08:29:54.153789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.472 [2024-02-13 08:29:54.153808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:20.732 00:29:20.732 Latency(us) 00:29:20.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.732 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.732 nvme0n1 : 2.00 28277.79 110.46 0.00 0.00 4518.89 2200.14 16852.11 00:29:20.732 =================================================================================================================== 00:29:20.732 Total : 28277.79 110.46 0.00 0.00 4518.89 2200.14 16852.11 00:29:20.732 0 00:29:20.732 08:29:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:20.732 08:29:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:20.732 08:29:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:20.732 | .driver_specific 00:29:20.732 | .nvme_error 00:29:20.732 | .status_code 00:29:20.732 | .command_transient_transport_error' 00:29:20.732 08:29:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:20.732 08:29:54 -- host/digest.sh@71 -- # (( 222 > 0 )) 00:29:20.732 08:29:54 -- host/digest.sh@73 -- # killprocess 2432777 00:29:20.732 08:29:54 -- common/autotest_common.sh@924 -- # '[' -z 2432777 ']' 00:29:20.732 08:29:54 -- common/autotest_common.sh@928 -- # kill -0 2432777 00:29:20.732 08:29:54 -- common/autotest_common.sh@929 -- # uname 00:29:20.732 08:29:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:20.732 08:29:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2432777 00:29:20.732 08:29:54 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:20.732 08:29:54 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:20.732 08:29:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2432777' 00:29:20.732 killing process with pid 2432777 00:29:20.732 08:29:54 -- 
common/autotest_common.sh@943 -- # kill 2432777 00:29:20.732 Received shutdown signal, test time was about 2.000000 seconds 00:29:20.732 00:29:20.732 Latency(us) 00:29:20.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.732 =================================================================================================================== 00:29:20.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.732 08:29:54 -- common/autotest_common.sh@948 -- # wait 2432777 00:29:20.992 08:29:54 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:20.992 08:29:54 -- host/digest.sh@54 -- # local rw bs qd 00:29:20.992 08:29:54 -- host/digest.sh@56 -- # rw=randwrite 00:29:20.992 08:29:54 -- host/digest.sh@56 -- # bs=131072 00:29:20.992 08:29:54 -- host/digest.sh@56 -- # qd=16 00:29:20.992 08:29:54 -- host/digest.sh@58 -- # bperfpid=2433353 00:29:20.992 08:29:54 -- host/digest.sh@60 -- # waitforlisten 2433353 /var/tmp/bperf.sock 00:29:20.992 08:29:54 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:20.992 08:29:54 -- common/autotest_common.sh@817 -- # '[' -z 2433353 ']' 00:29:20.992 08:29:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.992 08:29:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:20.992 08:29:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.992 08:29:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:20.992 08:29:54 -- common/autotest_common.sh@10 -- # set +x 00:29:20.992 [2024-02-13 08:29:54.636270] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:20.992 [2024-02-13 08:29:54.636315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433353 ] 00:29:20.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.992 Zero copy mechanism will not be used. 00:29:20.992 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.251 [2024-02-13 08:29:54.694660] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.251 [2024-02-13 08:29:54.759804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.820 08:29:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:21.820 08:29:55 -- common/autotest_common.sh@850 -- # return 0 00:29:21.820 08:29:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.820 08:29:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:22.080 08:29:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:22.080 08:29:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.080 08:29:55 -- common/autotest_common.sh@10 -- # set +x 00:29:22.080 08:29:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.080 08:29:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.080 08:29:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.339 nvme0n1 00:29:22.339 08:29:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 
00:29:22.339 08:29:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.339 08:29:55 -- common/autotest_common.sh@10 -- # set +x 00:29:22.339 08:29:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.339 08:29:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:22.339 08:29:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.629 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.629 Zero copy mechanism will not be used. 00:29:22.629 Running I/O for 2 seconds... 00:29:22.629 [2024-02-13 08:29:56.080554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.080805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.080833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.092452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.092674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.092698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.102666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.102893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 
[2024-02-13 08:29:56.102915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.112402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.112716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.112738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.122446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.122622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.122642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.132492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.132765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.132785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.143427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.143681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.143701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.154480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.154841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.154860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.164488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.164843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.164862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.174262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.174407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.174425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.183689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.184138] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.184157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.194198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.194543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.194562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.203462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.203690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.203710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.213048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.213295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.213314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.222637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 
08:29:56.222997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.223016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.232006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.232386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.629 [2024-02-13 08:29:56.232405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.629 [2024-02-13 08:29:56.241093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.629 [2024-02-13 08:29:56.241475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.630 [2024-02-13 08:29:56.241494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.630 [2024-02-13 08:29:56.252584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.630 [2024-02-13 08:29:56.252993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.630 [2024-02-13 08:29:56.253012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.630 [2024-02-13 08:29:56.264740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.630 [2024-02-13 08:29:56.265098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.630 [2024-02-13 08:29:56.265116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.630 [2024-02-13 08:29:56.276627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.630 [2024-02-13 08:29:56.276986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.630 [2024-02-13 08:29:56.277008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.630 [2024-02-13 08:29:56.289221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.630 [2024-02-13 08:29:56.289570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.630 [2024-02-13 08:29:56.289588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.630 [2024-02-13 08:29:56.300556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.630 [2024-02-13 08:29:56.300784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.630 [2024-02-13 08:29:56.300802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.630 [2024-02-13 08:29:56.312104] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.630 [2024-02-13 08:29:56.312490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.630 [2024-02-13 08:29:56.312510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.323416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.323815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.323834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.335496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.335680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.335699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.346301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.346571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.346589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.358513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.358752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.358770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.370976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.371305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.371323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.382816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.383100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.383119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.394012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.394233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.394250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.404956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.405215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.405233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.415999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.416322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.416340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.427359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.427749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.427769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.439142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.439345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.439364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.450666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.451060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.451078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.462370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.462678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.462696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.474359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.474735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.474755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.486765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.487030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.890 [2024-02-13 08:29:56.487048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.497993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.498404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.498422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.508937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.509360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.509378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.521137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.521526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.890 [2024-02-13 08:29:56.521546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.890 [2024-02-13 08:29:56.533180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.890 [2024-02-13 08:29:56.533504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.891 [2024-02-13 08:29:56.533522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.891 [2024-02-13 08:29:56.545348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.891 [2024-02-13 08:29:56.545665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.891 [2024-02-13 08:29:56.545684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.891 [2024-02-13 08:29:56.557748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.891 [2024-02-13 08:29:56.557923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.891 [2024-02-13 08:29:56.557941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.891 [2024-02-13 08:29:56.569460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:22.891 [2024-02-13 08:29:56.569775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.891 [2024-02-13 08:29:56.569793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.580531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.580819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.580842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.592215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.592497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.592515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.603320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.603509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.603527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.613977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.614241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.614259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.624292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 
00:29:23.151 [2024-02-13 08:29:56.624580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.624599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.634896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.635112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.635130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.644952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.645235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.645252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.656225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.656396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.666950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.667131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.667149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.677242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.677600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.677618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.687979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.688343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.688361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.699111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.699582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.699600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.710384] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.710584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.710602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.720664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.721039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.721061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.731906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.732191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.732210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.742028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.742356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.742374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.751321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.751642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.751665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.762972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.763319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.763336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.772102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.772361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.772378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.781934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.782252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.151 [2024-02-13 08:29:56.782269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.151 [2024-02-13 08:29:56.792246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.151 [2024-02-13 08:29:56.792589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.152 [2024-02-13 08:29:56.792607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.152 [2024-02-13 08:29:56.802994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.152 [2024-02-13 08:29:56.803388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.152 [2024-02-13 08:29:56.803406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.152 [2024-02-13 08:29:56.813292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.152 [2024-02-13 08:29:56.813619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.152 [2024-02-13 08:29:56.813637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.152 [2024-02-13 08:29:56.824388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.152 [2024-02-13 08:29:56.824620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.152 [2024-02-13 08:29:56.824639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.152 [2024-02-13 08:29:56.835656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.152 [2024-02-13 08:29:56.836020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.152 [2024-02-13 08:29:56.836038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.846766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.847045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.847063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.856671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.857012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.857034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.868400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.868660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.412 [2024-02-13 08:29:56.868678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.877553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.877767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.877785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.888282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.888487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.888505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.898712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.899102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.899120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.909105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.909502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.909520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.919553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.919824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.919841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.930130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.930404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.930423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.940405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.940666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.940683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.950507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.950806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.950825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.960260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.960588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.960606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.970111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.970473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.970491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.980218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:56.980633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.980657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.989868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 
00:29:23.412 [2024-02-13 08:29:56.990181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:56.990199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:56.999945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:57.000200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:57.000219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:57.009784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:57.010158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:57.010177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:57.018726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:57.018931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.412 [2024-02-13 08:29:57.018949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.412 [2024-02-13 08:29:57.028433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.412 [2024-02-13 08:29:57.028719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.413 [2024-02-13 08:29:57.028738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.413 [2024-02-13 08:29:57.037794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.413 [2024-02-13 08:29:57.038033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.413 [2024-02-13 08:29:57.038053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.413 [2024-02-13 08:29:57.048533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.413 [2024-02-13 08:29:57.048862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.413 [2024-02-13 08:29:57.048880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.413 [2024-02-13 08:29:57.058053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.413 [2024-02-13 08:29:57.058278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.413 [2024-02-13 08:29:57.058296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.413 [2024-02-13 08:29:57.068971] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.413 [2024-02-13 08:29:57.069165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.413 [2024-02-13 08:29:57.069183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.413 [2024-02-13 08:29:57.079868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.413 [2024-02-13 08:29:57.080007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.413 [2024-02-13 08:29:57.080024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.413 [2024-02-13 08:29:57.089268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.413 [2024-02-13 08:29:57.089598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.413 [2024-02-13 08:29:57.089616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.099012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.099343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.099361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.108078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.108410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.108428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.117936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.118293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.118315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.128315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.128651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.128670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.137793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.138047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.138066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.147868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.148146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.148164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.158174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.158471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.158489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.169376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.169662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.169680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.179684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.179952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.179970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.190547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.190810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.190827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.200675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.200941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.673 [2024-02-13 08:29:57.200959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.673 [2024-02-13 08:29:57.211707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.673 [2024-02-13 08:29:57.212038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.212055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.221851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.222136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.674 [2024-02-13 08:29:57.222154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.230941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.231114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.231132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.240579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.240808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.240827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.251044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.251408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.251426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.261387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.261669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.261687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.270821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.271096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.271114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.281053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.281275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.281293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.290276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.290468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.290490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.299464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.299674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.299692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.309891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.310106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.310123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.319391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.319612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.319630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.329338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.329636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.329660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.339186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 
00:29:23.674 [2024-02-13 08:29:57.339430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.339448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.349143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.674 [2024-02-13 08:29:57.349544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.674 [2024-02-13 08:29:57.349563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.674 [2024-02-13 08:29:57.359338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.934 [2024-02-13 08:29:57.359623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.934 [2024-02-13 08:29:57.359641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.934 [2024-02-13 08:29:57.369613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.934 [2024-02-13 08:29:57.369896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.934 [2024-02-13 08:29:57.369916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.934 [2024-02-13 08:29:57.379593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.934 [2024-02-13 08:29:57.379923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.934 [2024-02-13 08:29:57.379941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.389697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.389894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.389912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.399502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.399740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.399758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.409657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.410071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.410089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.420088] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.420427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.420445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.430287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.430741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.430758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.441175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.441387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.441406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.452113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.452492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.452511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.463712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.463984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.464002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.474522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.474791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.474809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.484306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.484523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.484541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.493659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.493978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.493996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.504230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.504537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.504556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.514712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.515132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.515149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.525248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.525470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.525488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.535146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.535422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.535439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.544569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.544840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.544858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.554974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.555352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.555373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.565370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.565673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.565691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.576308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.576597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.935 [2024-02-13 08:29:57.576614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.586726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.587018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.587036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.597710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.598125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.598143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.608567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.608840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.608859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:23.935 [2024-02-13 08:29:57.620015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:23.935 [2024-02-13 08:29:57.620391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.935 [2024-02-13 08:29:57.620411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.630784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.631088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.631108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.641780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.642021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.642040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.652692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.652999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.653017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.664571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.664853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.664881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.676067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.676456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.676473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.688001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.688270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.688288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.698922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.699221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.699239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.710711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 
00:29:24.196 [2024-02-13 08:29:57.710997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.711015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.722830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.723198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.723216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.735233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.735482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.735500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.746397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.746836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.746855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.757760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.758033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.758052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.769093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.769276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.769295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.781042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.781412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.781431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.793120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.793454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.793472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.805469] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.805865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.805883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.817475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.817760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.196 [2024-02-13 08:29:57.817778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.196 [2024-02-13 08:29:57.828284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.196 [2024-02-13 08:29:57.828556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.197 [2024-02-13 08:29:57.828573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.197 [2024-02-13 08:29:57.839645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.197 [2024-02-13 08:29:57.839887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.197 [2024-02-13 08:29:57.839905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:29:24.197 [2024-02-13 08:29:57.851719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.197 [2024-02-13 08:29:57.851974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.197 [2024-02-13 08:29:57.851995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.197 [2024-02-13 08:29:57.863782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.197 [2024-02-13 08:29:57.864140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.197 [2024-02-13 08:29:57.864160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.197 [2024-02-13 08:29:57.875119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.197 [2024-02-13 08:29:57.875547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.197 [2024-02-13 08:29:57.875566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.886962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.887324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.887343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.898770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.899061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.899079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.909466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.909729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.909747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.921778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.922167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.922185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.932846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.933201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.933220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.945704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.946054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.946072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.956817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.957014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.957032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.967929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.968211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.968230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.977474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.977758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:24.457 [2024-02-13 08:29:57.977776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:57.988848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:57.989155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:57.989172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:58.000790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:58.001184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:58.001203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:58.011783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:58.012067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:58.012085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:58.022512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:58.022694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:58.022711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:58.033527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:58.033974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:58.033992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:58.042626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:58.042858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:58.042876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.457 [2024-02-13 08:29:58.053218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb6df00) with pdu=0x2000190fef90 00:29:24.457 [2024-02-13 08:29:58.053507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.457 [2024-02-13 08:29:58.053525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.457 00:29:24.457 Latency(us) 00:29:24.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.458 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 
00:29:24.458 nvme0n1 : 2.01 2863.84 357.98 0.00 0.00 5577.31 3635.69 25090.93 00:29:24.458 =================================================================================================================== 00:29:24.458 Total : 2863.84 357.98 0.00 0.00 5577.31 3635.69 25090.93 00:29:24.458 0 00:29:24.458 08:29:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:24.458 08:29:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:24.458 08:29:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:24.458 | .driver_specific 00:29:24.458 | .nvme_error 00:29:24.458 | .status_code 00:29:24.458 | .command_transient_transport_error' 00:29:24.458 08:29:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:24.717 08:29:58 -- host/digest.sh@71 -- # (( 185 > 0 )) 00:29:24.718 08:29:58 -- host/digest.sh@73 -- # killprocess 2433353 00:29:24.718 08:29:58 -- common/autotest_common.sh@924 -- # '[' -z 2433353 ']' 00:29:24.718 08:29:58 -- common/autotest_common.sh@928 -- # kill -0 2433353 00:29:24.718 08:29:58 -- common/autotest_common.sh@929 -- # uname 00:29:24.718 08:29:58 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:24.718 08:29:58 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2433353 00:29:24.718 08:29:58 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:24.718 08:29:58 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:24.718 08:29:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2433353' 00:29:24.718 killing process with pid 2433353 00:29:24.718 08:29:58 -- common/autotest_common.sh@943 -- # kill 2433353 00:29:24.718 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.718 00:29:24.718 Latency(us) 00:29:24.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.718 
=================================================================================================================== 00:29:24.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.718 08:29:58 -- common/autotest_common.sh@948 -- # wait 2433353 00:29:24.977 08:29:58 -- host/digest.sh@115 -- # killprocess 2431225 00:29:24.977 08:29:58 -- common/autotest_common.sh@924 -- # '[' -z 2431225 ']' 00:29:24.977 08:29:58 -- common/autotest_common.sh@928 -- # kill -0 2431225 00:29:24.977 08:29:58 -- common/autotest_common.sh@929 -- # uname 00:29:24.977 08:29:58 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:24.977 08:29:58 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2431225 00:29:24.977 08:29:58 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:24.977 08:29:58 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:24.977 08:29:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2431225' 00:29:24.977 killing process with pid 2431225 00:29:24.977 08:29:58 -- common/autotest_common.sh@943 -- # kill 2431225 00:29:24.977 08:29:58 -- common/autotest_common.sh@948 -- # wait 2431225 00:29:25.237 00:29:25.237 real 0m16.985s 00:29:25.237 user 0m32.896s 00:29:25.237 sys 0m3.935s 00:29:25.237 08:29:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:25.237 08:29:58 -- common/autotest_common.sh@10 -- # set +x 00:29:25.237 ************************************ 00:29:25.237 END TEST nvmf_digest_error 00:29:25.237 ************************************ 00:29:25.237 08:29:58 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:25.237 08:29:58 -- host/digest.sh@139 -- # nvmftestfini 00:29:25.237 08:29:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:25.237 08:29:58 -- nvmf/common.sh@116 -- # sync 00:29:25.237 08:29:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:25.237 08:29:58 -- nvmf/common.sh@119 -- # set +e 00:29:25.237 08:29:58 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:29:25.237 08:29:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:25.237 rmmod nvme_tcp 00:29:25.237 rmmod nvme_fabrics 00:29:25.237 rmmod nvme_keyring 00:29:25.237 08:29:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:25.237 08:29:58 -- nvmf/common.sh@123 -- # set -e 00:29:25.237 08:29:58 -- nvmf/common.sh@124 -- # return 0 00:29:25.237 08:29:58 -- nvmf/common.sh@477 -- # '[' -n 2431225 ']' 00:29:25.237 08:29:58 -- nvmf/common.sh@478 -- # killprocess 2431225 00:29:25.237 08:29:58 -- common/autotest_common.sh@924 -- # '[' -z 2431225 ']' 00:29:25.237 08:29:58 -- common/autotest_common.sh@928 -- # kill -0 2431225 00:29:25.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (2431225) - No such process 00:29:25.238 08:29:58 -- common/autotest_common.sh@951 -- # echo 'Process with pid 2431225 is not found' 00:29:25.238 Process with pid 2431225 is not found 00:29:25.238 08:29:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:25.238 08:29:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:25.238 08:29:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:25.238 08:29:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.238 08:29:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:25.238 08:29:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.238 08:29:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.238 08:29:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.776 08:30:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:27.776 00:29:27.776 real 0m42.437s 00:29:27.776 user 1m7.219s 00:29:27.776 sys 0m12.761s 00:29:27.776 08:30:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:27.776 08:30:00 -- common/autotest_common.sh@10 -- # set +x 00:29:27.776 ************************************ 00:29:27.776 END TEST nvmf_digest 00:29:27.776 
************************************ 00:29:27.776 08:30:00 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:29:27.776 08:30:00 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:29:27.776 08:30:00 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:29:27.776 08:30:00 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:27.776 08:30:00 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:29:27.776 08:30:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:27.776 08:30:00 -- common/autotest_common.sh@10 -- # set +x 00:29:27.776 ************************************ 00:29:27.776 START TEST nvmf_bdevperf 00:29:27.776 ************************************ 00:29:27.776 08:30:00 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:27.776 * Looking for test storage... 00:29:27.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.776 08:30:01 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.776 08:30:01 -- nvmf/common.sh@7 -- # uname -s 00:29:27.776 08:30:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.776 08:30:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.776 08:30:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.776 08:30:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.776 08:30:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.776 08:30:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.776 08:30:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.776 08:30:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.776 08:30:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.776 08:30:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.776 08:30:01 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:27.776 08:30:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:27.776 08:30:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.776 08:30:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.776 08:30:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.776 08:30:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.776 08:30:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.776 08:30:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.776 08:30:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.776 08:30:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.776 08:30:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.776 08:30:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.776 08:30:01 -- paths/export.sh@5 -- # export PATH 00:29:27.776 08:30:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.776 08:30:01 -- nvmf/common.sh@46 -- # : 0 00:29:27.776 08:30:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:27.776 08:30:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:27.776 08:30:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:27.777 08:30:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.777 08:30:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.777 08:30:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:27.777 08:30:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:27.777 08:30:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:27.777 08:30:01 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:27.777 08:30:01 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:27.777 08:30:01 -- host/bdevperf.sh@24 -- # 
nvmftestinit 00:29:27.777 08:30:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:27.777 08:30:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.777 08:30:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:27.777 08:30:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:27.777 08:30:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:27.777 08:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.777 08:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.777 08:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.777 08:30:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:27.777 08:30:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:27.777 08:30:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:27.777 08:30:01 -- common/autotest_common.sh@10 -- # set +x 00:29:33.083 08:30:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:33.083 08:30:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:33.083 08:30:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:33.083 08:30:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:33.083 08:30:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:33.083 08:30:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:33.083 08:30:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:33.083 08:30:06 -- nvmf/common.sh@294 -- # net_devs=() 00:29:33.083 08:30:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:33.083 08:30:06 -- nvmf/common.sh@295 -- # e810=() 00:29:33.083 08:30:06 -- nvmf/common.sh@295 -- # local -ga e810 00:29:33.083 08:30:06 -- nvmf/common.sh@296 -- # x722=() 00:29:33.083 08:30:06 -- nvmf/common.sh@296 -- # local -ga x722 00:29:33.083 08:30:06 -- nvmf/common.sh@297 -- # mlx=() 00:29:33.083 08:30:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:33.083 08:30:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.083 08:30:06 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.083 08:30:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:33.083 08:30:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:33.083 08:30:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:33.083 08:30:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:33.083 08:30:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:33.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:33.083 08:30:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:33.083 08:30:06 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:33.083 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:33.083 08:30:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:33.083 08:30:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:33.083 08:30:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.083 08:30:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:33.083 08:30:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.083 08:30:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:33.083 Found net devices under 0000:af:00.0: cvl_0_0 00:29:33.083 08:30:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.083 08:30:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:33.083 08:30:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.083 08:30:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:33.083 08:30:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.083 08:30:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:33.083 Found net devices under 0000:af:00.1: cvl_0_1 00:29:33.083 08:30:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.083 08:30:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:33.083 08:30:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:33.083 08:30:06 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:29:33.083 08:30:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:33.083 08:30:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:33.083 08:30:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.083 08:30:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.083 08:30:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.083 08:30:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:33.083 08:30:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.083 08:30:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.083 08:30:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:33.083 08:30:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.083 08:30:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.083 08:30:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:33.083 08:30:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:33.083 08:30:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.083 08:30:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.343 08:30:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.343 08:30:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.343 08:30:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:33.343 08:30:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.343 08:30:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.343 08:30:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.343 08:30:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:33.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:33.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:29:33.343 00:29:33.343 --- 10.0.0.2 ping statistics --- 00:29:33.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.343 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:33.343 08:30:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:29:33.343 00:29:33.343 --- 10.0.0.1 ping statistics --- 00:29:33.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.343 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:29:33.343 08:30:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.343 08:30:06 -- nvmf/common.sh@410 -- # return 0 00:29:33.343 08:30:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:33.343 08:30:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.343 08:30:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:33.343 08:30:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:33.343 08:30:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.343 08:30:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:33.343 08:30:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:33.343 08:30:06 -- host/bdevperf.sh@25 -- # tgt_init 00:29:33.343 08:30:06 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:33.343 08:30:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:33.343 08:30:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:33.343 08:30:06 -- common/autotest_common.sh@10 -- # set +x 00:29:33.343 08:30:06 -- nvmf/common.sh@469 -- # nvmfpid=2437978 00:29:33.343 08:30:06 -- nvmf/common.sh@470 -- # waitforlisten 2437978 00:29:33.343 08:30:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0xE 00:29:33.343 08:30:06 -- common/autotest_common.sh@817 -- # '[' -z 2437978 ']' 00:29:33.343 08:30:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.343 08:30:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:33.343 08:30:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.343 08:30:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:33.343 08:30:06 -- common/autotest_common.sh@10 -- # set +x 00:29:33.343 [2024-02-13 08:30:06.994861] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:33.344 [2024-02-13 08:30:06.994906] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.344 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.603 [2024-02-13 08:30:07.057151] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:33.603 [2024-02-13 08:30:07.133629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:33.603 [2024-02-13 08:30:07.133737] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.603 [2024-02-13 08:30:07.133746] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.603 [2024-02-13 08:30:07.133752] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:33.603 [2024-02-13 08:30:07.133932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.603 [2024-02-13 08:30:07.133999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.603 [2024-02-13 08:30:07.134000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.172 08:30:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:34.172 08:30:07 -- common/autotest_common.sh@850 -- # return 0 00:29:34.172 08:30:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:34.172 08:30:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:34.172 08:30:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.172 08:30:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.172 08:30:07 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.172 08:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.172 08:30:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.172 [2024-02-13 08:30:07.825599] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.172 08:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.172 08:30:07 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.172 08:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.172 08:30:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.432 Malloc0 00:29:34.432 08:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.432 08:30:07 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.432 08:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.432 08:30:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.432 08:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.432 08:30:07 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.432 08:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.432 08:30:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.432 08:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.432 08:30:07 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.432 08:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:34.432 08:30:07 -- common/autotest_common.sh@10 -- # set +x 00:29:34.432 [2024-02-13 08:30:07.891119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.432 08:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:34.432 08:30:07 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:34.432 08:30:07 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:34.432 08:30:07 -- nvmf/common.sh@520 -- # config=() 00:29:34.432 08:30:07 -- nvmf/common.sh@520 -- # local subsystem config 00:29:34.432 08:30:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:34.432 08:30:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:34.432 { 00:29:34.432 "params": { 00:29:34.432 "name": "Nvme$subsystem", 00:29:34.432 "trtype": "$TEST_TRANSPORT", 00:29:34.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.432 "adrfam": "ipv4", 00:29:34.432 "trsvcid": "$NVMF_PORT", 00:29:34.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.432 "hdgst": ${hdgst:-false}, 00:29:34.432 "ddgst": ${ddgst:-false} 00:29:34.432 }, 00:29:34.432 "method": "bdev_nvme_attach_controller" 00:29:34.432 } 00:29:34.432 EOF 00:29:34.432 )") 00:29:34.432 08:30:07 -- nvmf/common.sh@542 -- # cat 00:29:34.432 08:30:07 -- nvmf/common.sh@544 -- # jq . 
00:29:34.432 08:30:07 -- nvmf/common.sh@545 -- # IFS=, 00:29:34.432 08:30:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:34.432 "params": { 00:29:34.432 "name": "Nvme1", 00:29:34.432 "trtype": "tcp", 00:29:34.432 "traddr": "10.0.0.2", 00:29:34.432 "adrfam": "ipv4", 00:29:34.432 "trsvcid": "4420", 00:29:34.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.432 "hdgst": false, 00:29:34.432 "ddgst": false 00:29:34.432 }, 00:29:34.432 "method": "bdev_nvme_attach_controller" 00:29:34.432 }' 00:29:34.432 [2024-02-13 08:30:07.936030] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:34.432 [2024-02-13 08:30:07.936081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438161 ] 00:29:34.432 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.432 [2024-02-13 08:30:07.998889] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.432 [2024-02-13 08:30:08.069640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.432 [2024-02-13 08:30:08.069703] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:35.001 Running I/O for 1 seconds... 
00:29:35.939
00:29:35.939 Latency(us)
00:29:35.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.939 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:35.939 Verification LBA range: start 0x0 length 0x4000
00:29:35.939 Nvme1n1 : 1.00 17042.46 66.57 0.00 0.00 7479.35 1115.67 20721.86
00:29:35.940 ===================================================================================================================
00:29:35.940 Total : 17042.46 66.57 0.00 0.00 7479.35 1115.67 20721.86
00:29:35.940 [2024-02-13 08:30:09.390823] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times
00:29:35.940 08:30:09 -- host/bdevperf.sh@30 -- # bdevperfpid=2438822
00:29:35.940 08:30:09 -- host/bdevperf.sh@32 -- # sleep 3
00:29:35.940 08:30:09 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:35.940 08:30:09 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:35.940 08:30:09 -- nvmf/common.sh@520 -- # config=()
00:29:35.940 08:30:09 -- nvmf/common.sh@520 -- # local subsystem config
00:29:35.940 08:30:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:29:35.940 08:30:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:29:35.940 {
00:29:35.940 "params": {
00:29:35.940 "name": "Nvme$subsystem",
00:29:35.940 "trtype": "$TEST_TRANSPORT",
00:29:35.940 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:35.940 "adrfam": "ipv4",
00:29:35.940 "trsvcid": "$NVMF_PORT",
00:29:35.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:35.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:35.940 "hdgst": ${hdgst:-false},
00:29:35.940 "ddgst": ${ddgst:-false}
00:29:35.940 },
00:29:35.940 "method": "bdev_nvme_attach_controller"
00:29:35.940 }
00:29:35.940 EOF
00:29:35.940 )")
00:29:35.940 08:30:09 -- nvmf/common.sh@542 -- # cat 00:29:35.940 08:30:09 -- nvmf/common.sh@544 -- # jq . 00:29:35.940 08:30:09 -- nvmf/common.sh@545 -- # IFS=, 00:29:35.940 08:30:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:35.940 "params": { 00:29:35.940 "name": "Nvme1", 00:29:35.940 "trtype": "tcp", 00:29:35.940 "traddr": "10.0.0.2", 00:29:35.940 "adrfam": "ipv4", 00:29:35.940 "trsvcid": "4420", 00:29:35.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.940 "hdgst": false, 00:29:35.940 "ddgst": false 00:29:35.940 }, 00:29:35.940 "method": "bdev_nvme_attach_controller" 00:29:35.940 }' 00:29:36.199 [2024-02-13 08:30:09.633325] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:36.199 [2024-02-13 08:30:09.633372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438822 ] 00:29:36.200 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.200 [2024-02-13 08:30:09.695095] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.200 [2024-02-13 08:30:09.763541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.200 [2024-02-13 08:30:09.763597] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:36.459 Running I/O for 15 seconds... 
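Editor's note on the 1-second bdevperf summary earlier in this log: with `-o 4096` (4096-byte I/Os), the MiB/s column follows directly from the IOPS column. A minimal sketch of that arithmetic, using the figures from the log (the helper name `iops_to_mib_s` is ours, not part of bdevperf):

```python
def iops_to_mib_s(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size (1 MiB = 1024 * 1024 bytes)."""
    return iops * io_size_bytes / (1024 * 1024)

# Figures from the 1-second verify run logged above (-q 128 -o 4096 -w verify -t 1)
mib_s = iops_to_mib_s(17042.46)
print(round(mib_s, 2))  # 66.57, matching the MiB/s column reported by bdevperf
```

The same conversion applies to the 15-second run started above, since it uses the same `-o 4096` I/O size.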
00:29:39.000 08:30:12 -- host/bdevperf.sh@33 -- # kill -9 2437978 00:29:39.000 08:30:12 -- host/bdevperf.sh@35 -- # sleep 3 00:29:39.000 [2024-02-13 08:30:12.608739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.000 [2024-02-13 08:30:12.608967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.608987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.608994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.609004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.609011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.609019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.609026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.609036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.609043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.609051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.609058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.609067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.609074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.000 [2024-02-13 08:30:12.609083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.000 [2024-02-13 08:30:12.609090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 
nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.001 [2024-02-13 08:30:12.609231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.001 [2024-02-13 08:30:12.609475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 
[2024-02-13 08:30:12.609483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.001 [2024-02-13 08:30:12.609597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.001 [2024-02-13 08:30:12.609606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.002 [2024-02-13 08:30:12.609660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.002 [2024-02-13 08:30:12.609726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.002 [2024-02-13 08:30:12.609732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:39.002 [2024-02-13 08:30:12.609741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.609965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.609988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.609994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.610008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.610023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.610038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.610052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.610066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.610081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.610095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.002 [2024-02-13 08:30:12.610110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.002 [2024-02-13 08:30:12.610118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.002 [2024-02-13 08:30:12.610124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.003 [2024-02-13 08:30:12.610575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.003 [2024-02-13 08:30:12.610617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.003 [2024-02-13 08:30:12.610625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.004 [2024-02-13 08:30:12.610631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.004 [2024-02-13 08:30:12.610650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.004 [2024-02-13 08:30:12.610664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.004 [2024-02-13 08:30:12.610678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610686] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c009f0 is same with the state(5) to be set
00:29:39.004 [2024-02-13 08:30:12.610694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:39.004 [2024-02-13 08:30:12.610699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:39.004 [2024-02-13 08:30:12.610705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106512 len:8 PRP1 0x0 PRP2 0x0
00:29:39.004 [2024-02-13 08:30:12.610713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610756] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c009f0 was disconnected and freed. reset controller.
00:29:39.004 [2024-02-13 08:30:12.610797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.004 [2024-02-13 08:30:12.610806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.004 [2024-02-13 08:30:12.610821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.004 [2024-02-13 08:30:12.610835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:39.004 [2024-02-13 08:30:12.610849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:39.004 [2024-02-13 08:30:12.610855] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.004 [2024-02-13 08:30:12.612612] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.004 [2024-02-13 08:30:12.612633] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.004 [2024-02-13 08:30:12.613355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.613723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.613734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.004 [2024-02-13 08:30:12.613741] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.004 [2024-02-13 08:30:12.613871] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.004 [2024-02-13 08:30:12.613988] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.004 [2024-02-13 08:30:12.613999] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.004 [2024-02-13 08:30:12.614007] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.004 [2024-02-13 08:30:12.615975] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.004 [2024-02-13 08:30:12.624820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.004 [2024-02-13 08:30:12.625256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.625507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.625539] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.004 [2024-02-13 08:30:12.625561] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.004 [2024-02-13 08:30:12.625906] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.004 [2024-02-13 08:30:12.626126] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.004 [2024-02-13 08:30:12.626134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.004 [2024-02-13 08:30:12.626140] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.004 [2024-02-13 08:30:12.627815] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.004 [2024-02-13 08:30:12.636682] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.004 [2024-02-13 08:30:12.637144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.637509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.637540] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.004 [2024-02-13 08:30:12.637561] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.004 [2024-02-13 08:30:12.637777] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.004 [2024-02-13 08:30:12.637889] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.004 [2024-02-13 08:30:12.637897] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.004 [2024-02-13 08:30:12.637904] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.004 [2024-02-13 08:30:12.639496] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.004 [2024-02-13 08:30:12.648506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.004 [2024-02-13 08:30:12.649060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.649415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.649446] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.004 [2024-02-13 08:30:12.649468] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.004 [2024-02-13 08:30:12.649912] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.004 [2024-02-13 08:30:12.650085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.004 [2024-02-13 08:30:12.650093] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.004 [2024-02-13 08:30:12.650103] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.004 [2024-02-13 08:30:12.651628] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.004 [2024-02-13 08:30:12.660366] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.004 [2024-02-13 08:30:12.660843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.661204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.004 [2024-02-13 08:30:12.661234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.004 [2024-02-13 08:30:12.661256] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.004 [2024-02-13 08:30:12.661584] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.004 [2024-02-13 08:30:12.661882] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.004 [2024-02-13 08:30:12.661908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.004 [2024-02-13 08:30:12.661929] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.004 [2024-02-13 08:30:12.663848] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.005 [2024-02-13 08:30:12.672091] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.005 [2024-02-13 08:30:12.672632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.005 [2024-02-13 08:30:12.673004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.005 [2024-02-13 08:30:12.673036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.005 [2024-02-13 08:30:12.673057] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.005 [2024-02-13 08:30:12.673488] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.005 [2024-02-13 08:30:12.673832] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.005 [2024-02-13 08:30:12.673859] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.005 [2024-02-13 08:30:12.673879] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.005 [2024-02-13 08:30:12.675742] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.005 [2024-02-13 08:30:12.684000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.266 [2024-02-13 08:30:12.684534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.684920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.684932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.266 [2024-02-13 08:30:12.684938] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.266 [2024-02-13 08:30:12.685053] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.266 [2024-02-13 08:30:12.685181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.266 [2024-02-13 08:30:12.685189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.266 [2024-02-13 08:30:12.685195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.266 [2024-02-13 08:30:12.687040] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.266 [2024-02-13 08:30:12.695826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.266 [2024-02-13 08:30:12.696340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.696619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.696664] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.266 [2024-02-13 08:30:12.696687] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.266 [2024-02-13 08:30:12.696939] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.266 [2024-02-13 08:30:12.697053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.266 [2024-02-13 08:30:12.697062] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.266 [2024-02-13 08:30:12.697068] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.266 [2024-02-13 08:30:12.698901] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.266 [2024-02-13 08:30:12.707776] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.266 [2024-02-13 08:30:12.708287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.708535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.708546] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.266 [2024-02-13 08:30:12.708552] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.266 [2024-02-13 08:30:12.708657] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.266 [2024-02-13 08:30:12.708799] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.266 [2024-02-13 08:30:12.708807] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.266 [2024-02-13 08:30:12.708814] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.266 [2024-02-13 08:30:12.710615] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.266 [2024-02-13 08:30:12.719873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:39.266 [2024-02-13 08:30:12.720381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.720618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.266 [2024-02-13 08:30:12.720628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:39.266 [2024-02-13 08:30:12.720635] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:39.266 [2024-02-13 08:30:12.720754] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:39.266 [2024-02-13 08:30:12.720854] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:39.266 [2024-02-13 08:30:12.720861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:39.266 [2024-02-13 08:30:12.720868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:39.266 [2024-02-13 08:30:12.722574] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:39.266 [2024-02-13 08:30:12.731856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.266 [2024-02-13 08:30:12.732310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.266 [2024-02-13 08:30:12.732656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.266 [2024-02-13 08:30:12.732666] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.266 [2024-02-13 08:30:12.732673] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.266 [2024-02-13 08:30:12.732831] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.266 [2024-02-13 08:30:12.732959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.266 [2024-02-13 08:30:12.732967] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.266 [2024-02-13 08:30:12.732974] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.734706] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.743759] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.744186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.744502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.744512] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.744519] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.744633] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.744721] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.267 [2024-02-13 08:30:12.744729] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.267 [2024-02-13 08:30:12.744735] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.746575] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.755700] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.756235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.756565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.756595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.756616] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.757063] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.757326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.267 [2024-02-13 08:30:12.757335] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.267 [2024-02-13 08:30:12.757340] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.759199] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.767617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.768142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.768430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.768462] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.768483] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.768828] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.769141] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.267 [2024-02-13 08:30:12.769149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.267 [2024-02-13 08:30:12.769156] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.770938] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.779487] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.780004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.780408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.780439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.780461] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.780904] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.781212] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.267 [2024-02-13 08:30:12.781224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.267 [2024-02-13 08:30:12.781234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.784082] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.792233] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.792649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.793022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.793053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.793074] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.793425] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.793498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.267 [2024-02-13 08:30:12.793507] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.267 [2024-02-13 08:30:12.793513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.795338] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.804072] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.804463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.804769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.804783] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.804814] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.805144] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.805445] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.267 [2024-02-13 08:30:12.805453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.267 [2024-02-13 08:30:12.805459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.807277] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.815844] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.816350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.816706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.816739] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.816761] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.817141] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.817500] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.267 [2024-02-13 08:30:12.817509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.267 [2024-02-13 08:30:12.817515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.267 [2024-02-13 08:30:12.819112] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.267 [2024-02-13 08:30:12.827726] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.267 [2024-02-13 08:30:12.828232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.828616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.267 [2024-02-13 08:30:12.828661] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.267 [2024-02-13 08:30:12.828683] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.267 [2024-02-13 08:30:12.829112] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.267 [2024-02-13 08:30:12.829291] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.829299] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.829305] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.831039] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.839568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.840077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.840484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.840516] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.840545] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.840790] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.841224] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.841248] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.841269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.843981] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.852218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.852733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.853141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.853179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.853186] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.853322] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.853426] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.853435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.853441] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.855313] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.863987] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.864442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.864678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.864711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.864732] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.865211] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.865412] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.865420] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.865425] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.867226] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.875985] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.876469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.876866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.876902] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.876924] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.877313] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.877624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.877632] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.877638] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.879425] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.887847] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.888249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.888603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.888633] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.888668] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.888950] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.889281] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.889305] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.889331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.891123] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.899706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.900167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.900512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.900542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.900564] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.900791] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.900887] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.900894] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.900900] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.902476] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.911550] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.912083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.912414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.912446] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.912468] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.912684] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.912772] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.912780] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.912787] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.914456] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.923446] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.923898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.924238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.924269] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.924290] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.924659] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.268 [2024-02-13 08:30:12.924785] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.268 [2024-02-13 08:30:12.924793] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.268 [2024-02-13 08:30:12.924799] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.268 [2024-02-13 08:30:12.926628] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.268 [2024-02-13 08:30:12.935164] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.268 [2024-02-13 08:30:12.935632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.936005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.268 [2024-02-13 08:30:12.936038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.268 [2024-02-13 08:30:12.936059] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.268 [2024-02-13 08:30:12.936390] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.269 [2024-02-13 08:30:12.936651] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.269 [2024-02-13 08:30:12.936660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.269 [2024-02-13 08:30:12.936666] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.269 [2024-02-13 08:30:12.938314] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.269 [2024-02-13 08:30:12.947109] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.269 [2024-02-13 08:30:12.947658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.269 [2024-02-13 08:30:12.948019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.269 [2024-02-13 08:30:12.948050] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.269 [2024-02-13 08:30:12.948073] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.269 [2024-02-13 08:30:12.948403] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.269 [2024-02-13 08:30:12.948664] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.269 [2024-02-13 08:30:12.948676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.269 [2024-02-13 08:30:12.948682] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.269 [2024-02-13 08:30:12.950439] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.530 [2024-02-13 08:30:12.959049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.530 [2024-02-13 08:30:12.959597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.959914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.959925] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.530 [2024-02-13 08:30:12.959932] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.530 [2024-02-13 08:30:12.960072] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.530 [2024-02-13 08:30:12.960182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.530 [2024-02-13 08:30:12.960189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.530 [2024-02-13 08:30:12.960195] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.530 [2024-02-13 08:30:12.962066] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.530 [2024-02-13 08:30:12.970919] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.530 [2024-02-13 08:30:12.971423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.971759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.971793] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.530 [2024-02-13 08:30:12.971814] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.530 [2024-02-13 08:30:12.972029] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.530 [2024-02-13 08:30:12.972139] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.530 [2024-02-13 08:30:12.972148] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.530 [2024-02-13 08:30:12.972153] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.530 [2024-02-13 08:30:12.973835] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.530 [2024-02-13 08:30:12.982537] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.530 [2024-02-13 08:30:12.983086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.983473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.983505] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.530 [2024-02-13 08:30:12.983526] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.530 [2024-02-13 08:30:12.983733] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.530 [2024-02-13 08:30:12.983830] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.530 [2024-02-13 08:30:12.983838] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.530 [2024-02-13 08:30:12.983846] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.530 [2024-02-13 08:30:12.985517] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.530 [2024-02-13 08:30:12.994330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.530 [2024-02-13 08:30:12.994797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.995157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:12.995188] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.530 [2024-02-13 08:30:12.995210] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.530 [2024-02-13 08:30:12.995491] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.530 [2024-02-13 08:30:12.995654] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.530 [2024-02-13 08:30:12.995663] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.530 [2024-02-13 08:30:12.995669] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.530 [2024-02-13 08:30:12.997444] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.530 [2024-02-13 08:30:13.006089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.530 [2024-02-13 08:30:13.006586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:13.006876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:13.006910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.530 [2024-02-13 08:30:13.006933] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.530 [2024-02-13 08:30:13.007462] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.530 [2024-02-13 08:30:13.007731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.530 [2024-02-13 08:30:13.007740] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.530 [2024-02-13 08:30:13.007746] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.530 [2024-02-13 08:30:13.009470] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.530 [2024-02-13 08:30:13.017908] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.530 [2024-02-13 08:30:13.018409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:13.018741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.530 [2024-02-13 08:30:13.018773] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.530 [2024-02-13 08:30:13.018795] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.530 [2024-02-13 08:30:13.018993] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.530 [2024-02-13 08:30:13.019089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.530 [2024-02-13 08:30:13.019097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.019103] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.020706] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.029828] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.030314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.030666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.030699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.030721] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.031003] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.031285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.031309] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.031330] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.033548] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.041674] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.042192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.042500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.042532] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.042555] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.042903] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.043060] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.043068] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.043075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.044788] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.053490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.054028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.054434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.054466] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.054488] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.054711] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.054810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.054818] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.054824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.056626] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.065379] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.065781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.066127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.066158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.066180] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.066610] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.066835] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.066843] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.066849] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.068520] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.077226] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.077728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.078082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.078114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.078135] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.078564] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.078865] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.078874] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.078879] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.080520] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.089066] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.089581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.089888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.089900] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.089907] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.090018] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.090127] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.090135] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.090141] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.091895] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.101002] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.101489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.101728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.101761] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.101783] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.102025] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.102149] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.102157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.102163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.103720] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.112758] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.113233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.113597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.113628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.113664] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.113996] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.114144] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.531 [2024-02-13 08:30:13.114152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.531 [2024-02-13 08:30:13.114158] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.531 [2024-02-13 08:30:13.115951] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.531 [2024-02-13 08:30:13.124609] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.531 [2024-02-13 08:30:13.125151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.125370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.531 [2024-02-13 08:30:13.125401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.531 [2024-02-13 08:30:13.125422] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.531 [2024-02-13 08:30:13.125765] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.531 [2024-02-13 08:30:13.126156] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.126164] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.126170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.127843] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.532 [2024-02-13 08:30:13.136541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.532 [2024-02-13 08:30:13.136973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.137347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.137378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.532 [2024-02-13 08:30:13.137400] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.532 [2024-02-13 08:30:13.137806] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.532 [2024-02-13 08:30:13.138191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.138215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.138235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.139977] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.532 [2024-02-13 08:30:13.148273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.532 [2024-02-13 08:30:13.148764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.149114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.149145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.532 [2024-02-13 08:30:13.149166] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.532 [2024-02-13 08:30:13.149594] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.532 [2024-02-13 08:30:13.149987] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.150012] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.150032] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.151922] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.532 [2024-02-13 08:30:13.159996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.532 [2024-02-13 08:30:13.160526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.160806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.160818] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.532 [2024-02-13 08:30:13.160825] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.532 [2024-02-13 08:30:13.160949] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.532 [2024-02-13 08:30:13.161059] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.161066] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.161072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.162786] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.532 [2024-02-13 08:30:13.171820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.532 [2024-02-13 08:30:13.172305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.172726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.172759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.532 [2024-02-13 08:30:13.172788] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.532 [2024-02-13 08:30:13.172976] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.532 [2024-02-13 08:30:13.173086] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.173094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.173100] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.174866] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.532 [2024-02-13 08:30:13.183706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.532 [2024-02-13 08:30:13.184235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.184620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.184664] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.532 [2024-02-13 08:30:13.184687] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.532 [2024-02-13 08:30:13.184885] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.532 [2024-02-13 08:30:13.184972] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.184984] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.184994] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.187628] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.532 [2024-02-13 08:30:13.196502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.532 [2024-02-13 08:30:13.196999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.197342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.197373] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.532 [2024-02-13 08:30:13.197394] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.532 [2024-02-13 08:30:13.197789] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.532 [2024-02-13 08:30:13.198369] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.198393] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.198413] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.200234] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.532 [2024-02-13 08:30:13.208311] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.532 [2024-02-13 08:30:13.208844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.209132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.532 [2024-02-13 08:30:13.209163] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.532 [2024-02-13 08:30:13.209184] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.532 [2024-02-13 08:30:13.209570] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.532 [2024-02-13 08:30:13.209914] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.532 [2024-02-13 08:30:13.209940] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.532 [2024-02-13 08:30:13.209959] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.532 [2024-02-13 08:30:13.212024] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.793 [2024-02-13 08:30:13.220241] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.793 [2024-02-13 08:30:13.220764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.793 [2024-02-13 08:30:13.221133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.793 [2024-02-13 08:30:13.221164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.793 [2024-02-13 08:30:13.221186] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.793 [2024-02-13 08:30:13.221466] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.793 [2024-02-13 08:30:13.221634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.221642] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.221653] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.223543] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.231922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.232361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.232658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.232668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.232686] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.232838] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.232962] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.232970] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.232976] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.234790] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.243711] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.244039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.244323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.244333] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.244339] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.244416] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.244549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.244557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.244562] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.246129] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.255478] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.255984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.256342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.256352] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.256358] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.256488] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.256592] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.256599] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.256605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.258264] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.267242] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.267720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.268103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.268134] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.268156] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.268534] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.268709] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.268718] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.268724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.270379] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.279062] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.279603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.279967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.280000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.280023] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.280354] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.280592] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.280603] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.280609] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.282284] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.290839] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.291333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.291674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.291707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.291729] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.292060] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.292440] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.292464] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.292485] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.294299] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.302611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.303116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.303498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.303529] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.303554] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.303663] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.303746] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.303753] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.303759] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.305409] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.314447] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.314876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.315248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.315279] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.315301] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.315642] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.794 [2024-02-13 08:30:13.315756] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.794 [2024-02-13 08:30:13.315764] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.794 [2024-02-13 08:30:13.315773] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.794 [2024-02-13 08:30:13.317492] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.794 [2024-02-13 08:30:13.326202] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.794 [2024-02-13 08:30:13.326722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.327087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.794 [2024-02-13 08:30:13.327118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.794 [2024-02-13 08:30:13.327139] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.794 [2024-02-13 08:30:13.327469] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.327688] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.327697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.327704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.329413] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.338049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.338525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.338929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.338963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.338985] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.339365] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.339704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.339713] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.339720] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.341363] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.349995] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.350449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.350855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.350888] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.350912] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.351294] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.351733] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.351760] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.351781] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.353552] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.361934] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.362422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.362816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.362851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.362873] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.363000] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.363096] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.363103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.363109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.364860] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.373908] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.374375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.374738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.374771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.374793] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.375174] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.375565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.375574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.375581] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.377428] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.385685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.386210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.386520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.386551] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.386573] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.386799] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.386895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.386904] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.386910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.388580] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.397551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.398023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.398347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.398378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.398400] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.398845] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.399009] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.399017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.399023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.400535] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.409404] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.409933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.410186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.410218] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.410240] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.410678] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.410791] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.410799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.410805] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.412436] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.421235] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.421771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.422112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.422143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.795 [2024-02-13 08:30:13.422164] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.795 [2024-02-13 08:30:13.422443] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.795 [2024-02-13 08:30:13.422771] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.795 [2024-02-13 08:30:13.422779] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.795 [2024-02-13 08:30:13.422785] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.795 [2024-02-13 08:30:13.424363] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.795 [2024-02-13 08:30:13.433100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.795 [2024-02-13 08:30:13.433590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.433942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.795 [2024-02-13 08:30:13.433974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.796 [2024-02-13 08:30:13.433997] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.796 [2024-02-13 08:30:13.434213] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.796 [2024-02-13 08:30:13.434338] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.796 [2024-02-13 08:30:13.434346] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.796 [2024-02-13 08:30:13.434352] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.796 [2024-02-13 08:30:13.436207] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.796 [2024-02-13 08:30:13.444978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.796 [2024-02-13 08:30:13.445404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.796 [2024-02-13 08:30:13.445780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.796 [2024-02-13 08:30:13.445812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.796 [2024-02-13 08:30:13.445834] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.796 [2024-02-13 08:30:13.446114] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.796 [2024-02-13 08:30:13.446396] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.796 [2024-02-13 08:30:13.446421] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.796 [2024-02-13 08:30:13.446441] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.796 [2024-02-13 08:30:13.448408] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.796 [2024-02-13 08:30:13.456724] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.796 [2024-02-13 08:30:13.457168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.796 [2024-02-13 08:30:13.457598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.796 [2024-02-13 08:30:13.457629] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.796 [2024-02-13 08:30:13.457663] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.796 [2024-02-13 08:30:13.457881] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.796 [2024-02-13 08:30:13.457977] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.796 [2024-02-13 08:30:13.457985] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.796 [2024-02-13 08:30:13.457992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.796 [2024-02-13 08:30:13.459668] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:39.796 [2024-02-13 08:30:13.468440] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.796 [2024-02-13 08:30:13.468924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.796 [2024-02-13 08:30:13.469295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.796 [2024-02-13 08:30:13.469328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:39.796 [2024-02-13 08:30:13.469351] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:39.796 [2024-02-13 08:30:13.469790] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:39.796 [2024-02-13 08:30:13.470087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:39.796 [2024-02-13 08:30:13.470095] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:39.796 [2024-02-13 08:30:13.470101] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.796 [2024-02-13 08:30:13.471723] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.057 [2024-02-13 08:30:13.480338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.057 [2024-02-13 08:30:13.480797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-02-13 08:30:13.481094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-02-13 08:30:13.481124] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.057 [2024-02-13 08:30:13.481146] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.057 [2024-02-13 08:30:13.481475] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.057 [2024-02-13 08:30:13.481813] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.057 [2024-02-13 08:30:13.481821] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.057 [2024-02-13 08:30:13.481827] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.057 [2024-02-13 08:30:13.483563] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.057 [2024-02-13 08:30:13.491997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.057 [2024-02-13 08:30:13.492576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-02-13 08:30:13.492954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-02-13 08:30:13.492986] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.057 [2024-02-13 08:30:13.493007] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.057 [2024-02-13 08:30:13.493130] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.057 [2024-02-13 08:30:13.493239] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.057 [2024-02-13 08:30:13.493247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.057 [2024-02-13 08:30:13.493253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.057 [2024-02-13 08:30:13.494902] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.057 [2024-02-13 08:30:13.503896] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.057 [2024-02-13 08:30:13.504464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-02-13 08:30:13.504870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-02-13 08:30:13.504902] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.057 [2024-02-13 08:30:13.504930] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.057 [2024-02-13 08:30:13.505040] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.057 [2024-02-13 08:30:13.505136] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.057 [2024-02-13 08:30:13.505144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.057 [2024-02-13 08:30:13.505149] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.057 [2024-02-13 08:30:13.506833] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.057 [2024-02-13 08:30:13.515711] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.057 [2024-02-13 08:30:13.516185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.057 [2024-02-13 08:30:13.516515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.058 [2024-02-13 08:30:13.516547] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.058 [2024-02-13 08:30:13.516569] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.058 [2024-02-13 08:30:13.516879] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.058 [2024-02-13 08:30:13.517033] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.058 [2024-02-13 08:30:13.517041] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.058 [2024-02-13 08:30:13.517047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.058 [2024-02-13 08:30:13.518612] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.058 [2024-02-13 08:30:13.527529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.528020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.528376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.528407] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.528429] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.528768] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.529227] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.529239] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.529249] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.058 [2024-02-13 08:30:13.532288] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.058 [2024-02-13 08:30:13.540288] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.540810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.541101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.541131] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.541153] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.541337] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.541457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.541466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.541472] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.058 [2024-02-13 08:30:13.543320] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.058 [2024-02-13 08:30:13.552144] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.552686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.553025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.553056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.553077] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.553316] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.553398] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.553406] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.553412] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.058 [2024-02-13 08:30:13.555183] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.058 [2024-02-13 08:30:13.563922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.564379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.564732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.564765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.564787] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.565018] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.565392] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.565400] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.565405] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.058 [2024-02-13 08:30:13.567040] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.058 [2024-02-13 08:30:13.575795] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.576186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.576584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.576594] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.576601] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.576672] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.576785] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.576792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.576798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.058 [2024-02-13 08:30:13.578523] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.058 [2024-02-13 08:30:13.587681] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.588080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.588414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.588446] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.588468] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.588857] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.589303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.589311] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.589317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.058 [2024-02-13 08:30:13.591057] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.058 [2024-02-13 08:30:13.599860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.600347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.600747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.600780] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.600802] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.601281] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.601475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.601483] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.601489] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.058 [2024-02-13 08:30:13.603188] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.058 [2024-02-13 08:30:13.611582] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.058 [2024-02-13 08:30:13.611999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.612238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.058 [2024-02-13 08:30:13.612248] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.058 [2024-02-13 08:30:13.612255] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.058 [2024-02-13 08:30:13.612337] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.058 [2024-02-13 08:30:13.612434] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.058 [2024-02-13 08:30:13.612442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.058 [2024-02-13 08:30:13.612448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.614136] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.623374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.623786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.624075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.624106] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.624128] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.624333] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.624451] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.624459] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.624464] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.626269] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.635421] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.635911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.636255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.636286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.636308] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.636695] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.637027] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.637052] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.637072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.639037] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.647191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.647711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.648004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.648035] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.648057] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.648214] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.648324] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.648332] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.648341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.650036] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.659110] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.659673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.660012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.660044] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.660066] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.660296] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.660406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.660414] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.660420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.662175] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.670878] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.671331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.671726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.671759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.671781] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.672161] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.672487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.672495] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.672501] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.674443] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.682726] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.683128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.683374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.683405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.683427] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.683770] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.684151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.684176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.684203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.686318] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.694544] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.694980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.695308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.695340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.695362] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.695803] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.696234] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.696258] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.696278] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.698205] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.706413] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.706967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.707307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.707338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.707360] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.707544] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.707657] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.707665] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.059 [2024-02-13 08:30:13.707672] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.059 [2024-02-13 08:30:13.709508] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.059 [2024-02-13 08:30:13.718339] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.059 [2024-02-13 08:30:13.718881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.719172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.059 [2024-02-13 08:30:13.719204] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.059 [2024-02-13 08:30:13.719226] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.059 [2024-02-13 08:30:13.719463] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.059 [2024-02-13 08:30:13.719573] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.059 [2024-02-13 08:30:13.719582] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.060 [2024-02-13 08:30:13.719587] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.060 [2024-02-13 08:30:13.721172] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.060 [2024-02-13 08:30:13.730087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.060 [2024-02-13 08:30:13.730501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.060 [2024-02-13 08:30:13.730795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.060 [2024-02-13 08:30:13.730807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.060 [2024-02-13 08:30:13.730813] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.060 [2024-02-13 08:30:13.730938] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.060 [2024-02-13 08:30:13.731005] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.060 [2024-02-13 08:30:13.731012] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.060 [2024-02-13 08:30:13.731017] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.060 [2024-02-13 08:30:13.732666] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.060 [2024-02-13 08:30:13.742191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.321 [2024-02-13 08:30:13.742817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.743070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.743081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.321 [2024-02-13 08:30:13.743088] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.321 [2024-02-13 08:30:13.743201] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.321 [2024-02-13 08:30:13.743299] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.321 [2024-02-13 08:30:13.743307] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.321 [2024-02-13 08:30:13.743313] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.321 [2024-02-13 08:30:13.745087] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.321 [2024-02-13 08:30:13.754075] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.321 [2024-02-13 08:30:13.754436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.754804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.754817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.321 [2024-02-13 08:30:13.754824] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.321 [2024-02-13 08:30:13.754910] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.321 [2024-02-13 08:30:13.755052] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.321 [2024-02-13 08:30:13.755061] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.321 [2024-02-13 08:30:13.755067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.321 [2024-02-13 08:30:13.756742] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.321 [2024-02-13 08:30:13.766097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.321 [2024-02-13 08:30:13.766616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.766906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.766918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.321 [2024-02-13 08:30:13.766924] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.321 [2024-02-13 08:30:13.766994] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.321 [2024-02-13 08:30:13.767121] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.321 [2024-02-13 08:30:13.767128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.321 [2024-02-13 08:30:13.767134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.321 [2024-02-13 08:30:13.768835] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.321 [2024-02-13 08:30:13.777925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.321 [2024-02-13 08:30:13.778388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.778695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.778706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.321 [2024-02-13 08:30:13.778712] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.321 [2024-02-13 08:30:13.778827] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.321 [2024-02-13 08:30:13.778969] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.321 [2024-02-13 08:30:13.778977] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.321 [2024-02-13 08:30:13.778983] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.321 [2024-02-13 08:30:13.780757] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.321 [2024-02-13 08:30:13.789880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.321 [2024-02-13 08:30:13.790325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.790614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.321 [2024-02-13 08:30:13.790624] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.321 [2024-02-13 08:30:13.790631] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.321 [2024-02-13 08:30:13.790762] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.321 [2024-02-13 08:30:13.790875] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.321 [2024-02-13 08:30:13.790883] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.321 [2024-02-13 08:30:13.790889] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.322 [2024-02-13 08:30:13.792618] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.322 [2024-02-13 08:30:13.801807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.802295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.802742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.802775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.802797] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.802993] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.803106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.803114] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.803120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.805802] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.814447] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.814973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.815295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.815326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.815347] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.815899] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.816004] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.816013] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.816019] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.817888] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.826426] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.826903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.827251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.827282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.827304] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.827739] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.828019] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.828027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.828033] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.829749] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.838213] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.838680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.839066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.839076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.839086] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.839229] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.839343] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.839351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.839357] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.841046] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.849967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.850465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.850825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.850835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.850843] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.850971] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.851098] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.851106] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.851112] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.852742] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.861842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.862300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.862680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.862690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.862697] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.862808] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.862903] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.862910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.862916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.864635] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.873689] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.874211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.874523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.874555] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.874576] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.875044] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.875156] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.875164] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.875170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.876788] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.885540] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.886058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.886458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.886490] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.322 [2024-02-13 08:30:13.886512] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.322 [2024-02-13 08:30:13.886745] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.322 [2024-02-13 08:30:13.886875] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.322 [2024-02-13 08:30:13.886883] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.322 [2024-02-13 08:30:13.886889] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.322 [2024-02-13 08:30:13.888721] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.322 [2024-02-13 08:30:13.897274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.322 [2024-02-13 08:30:13.897757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.898162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.322 [2024-02-13 08:30:13.898192] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.898213] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.898593] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.898808] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.898816] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.898822] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.900532] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.909036] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.909549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.909983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.910017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.910040] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.910199] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.910309] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.910317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.910323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.912059] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.920954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.921403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.921830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.921863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.921885] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.922087] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.922212] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.922220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.922225] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.924085] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.932664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.933142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.933501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.933531] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.933553] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.933937] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.934048] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.934056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.934062] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.935637] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.944460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.944954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.945277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.945309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.945333] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.945776] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.945968] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.945976] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.945981] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.947692] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.956328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.956848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.957191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.957223] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.957245] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.957689] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.957924] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.957948] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.957969] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.960004] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.968135] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.968611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.968956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.968989] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.969011] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.969489] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.969830] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.969856] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.969876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.971745] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.980145] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.980670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.981096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.981127] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.981148] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.981308] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.981432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.981443] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.981449] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.983062] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:13.991901] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:13.992366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.992761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:13.992772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.323 [2024-02-13 08:30:13.992779] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.323 [2024-02-13 08:30:13.992890] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.323 [2024-02-13 08:30:13.992985] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.323 [2024-02-13 08:30:13.992992] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.323 [2024-02-13 08:30:13.992998] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.323 [2024-02-13 08:30:13.994699] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.323 [2024-02-13 08:30:14.003699] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.323 [2024-02-13 08:30:14.004176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.323 [2024-02-13 08:30:14.004539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.324 [2024-02-13 08:30:14.004570] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.324 [2024-02-13 08:30:14.004592] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.324 [2024-02-13 08:30:14.005033] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.324 [2024-02-13 08:30:14.005364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.324 [2024-02-13 08:30:14.005388] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.324 [2024-02-13 08:30:14.005408] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.585 [2024-02-13 08:30:14.008495] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.585 [2024-02-13 08:30:14.016382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.585 [2024-02-13 08:30:14.016857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.017233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.017244] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.585 [2024-02-13 08:30:14.017251] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.585 [2024-02-13 08:30:14.017403] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.585 [2024-02-13 08:30:14.017507] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.585 [2024-02-13 08:30:14.017515] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.585 [2024-02-13 08:30:14.017526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.585 [2024-02-13 08:30:14.019289] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.585 [2024-02-13 08:30:14.028142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.585 [2024-02-13 08:30:14.028607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.028986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.029021] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.585 [2024-02-13 08:30:14.029044] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.585 [2024-02-13 08:30:14.029509] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.585 [2024-02-13 08:30:14.029619] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.585 [2024-02-13 08:30:14.029627] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.585 [2024-02-13 08:30:14.029633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.585 [2024-02-13 08:30:14.031375] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.585 [2024-02-13 08:30:14.040031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.585 [2024-02-13 08:30:14.040560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.040920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.040953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.585 [2024-02-13 08:30:14.040975] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.585 [2024-02-13 08:30:14.041136] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.585 [2024-02-13 08:30:14.041260] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.585 [2024-02-13 08:30:14.041268] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.585 [2024-02-13 08:30:14.041274] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.585 [2024-02-13 08:30:14.042885] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.585 [2024-02-13 08:30:14.051930] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.585 [2024-02-13 08:30:14.052392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.052783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.052817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.585 [2024-02-13 08:30:14.052839] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.585 [2024-02-13 08:30:14.053120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.585 [2024-02-13 08:30:14.053500] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.585 [2024-02-13 08:30:14.053527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.585 [2024-02-13 08:30:14.053533] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.585 [2024-02-13 08:30:14.055154] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.585 [2024-02-13 08:30:14.063912] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.585 [2024-02-13 08:30:14.064398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.064784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.585 [2024-02-13 08:30:14.064816] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.585 [2024-02-13 08:30:14.064838] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.585 [2024-02-13 08:30:14.065315] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.585 [2024-02-13 08:30:14.065591] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.585 [2024-02-13 08:30:14.065599] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.585 [2024-02-13 08:30:14.065605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.067332] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.075815] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.076300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.076670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.076703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.076724] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.077004] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.077270] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.077278] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.077284] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.078979] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.087566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.088057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.088413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.088444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.088465] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.088595] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.088742] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.088751] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.088757] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.090347] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.099223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.099729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.100151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.100183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.100204] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.100310] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.100449] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.100456] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.100462] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.102221] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.111035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.111594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.111940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.111973] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.111995] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.112425] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.112714] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.112723] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.112729] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.114413] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.122804] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.123288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.123692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.123725] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.123754] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.123835] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.123959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.123967] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.123973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.125600] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.134841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.135361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.135697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.135708] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.135715] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.135814] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.135921] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.135928] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.135933] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.137728] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.146643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.147137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.147498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.147529] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.147551] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.147943] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.148275] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.148299] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.148319] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.150201] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.158521] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.158981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.159368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.159398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.586 [2024-02-13 08:30:14.159419] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.586 [2024-02-13 08:30:14.159762] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.586 [2024-02-13 08:30:14.159887] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.586 [2024-02-13 08:30:14.159895] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.586 [2024-02-13 08:30:14.159901] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.586 [2024-02-13 08:30:14.161572] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.586 [2024-02-13 08:30:14.170458] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.586 [2024-02-13 08:30:14.170930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.171353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.586 [2024-02-13 08:30:14.171392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.171414] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.171595] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.171710] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.171718] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.171725] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.173432] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.182334] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.182807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.183207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.183238] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.183259] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.183530] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.183607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.183615] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.183620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.185255] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.194163] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.194668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.195025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.195056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.195077] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.195506] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.195662] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.195670] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.195676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.197451] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.205974] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.206477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.206827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.206861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.206890] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.207319] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.207659] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.207686] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.207706] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.210812] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.218599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.219145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.219499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.219530] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.219550] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.219758] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.219847] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.219855] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.219862] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.221683] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.230317] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.230808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.231185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.231195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.231201] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.231283] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.231393] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.231400] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.231406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.233165] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.242125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.242640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.243068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.243100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.243121] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.243458] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.243627] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.243635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.243641] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.245222] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.253944] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.254453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.254885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.254918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.254940] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.255370] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.255666] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.255674] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.587 [2024-02-13 08:30:14.255680] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.587 [2024-02-13 08:30:14.257416] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.587 [2024-02-13 08:30:14.265919] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.587 [2024-02-13 08:30:14.266427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.266807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.587 [2024-02-13 08:30:14.266818] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.587 [2024-02-13 08:30:14.266824] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.587 [2024-02-13 08:30:14.266924] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.587 [2024-02-13 08:30:14.267007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.587 [2024-02-13 08:30:14.267014] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.588 [2024-02-13 08:30:14.267020] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.588 [2024-02-13 08:30:14.268676] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.849 [2024-02-13 08:30:14.277951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.849 [2024-02-13 08:30:14.278425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.278818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.278829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.849 [2024-02-13 08:30:14.278836] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.849 [2024-02-13 08:30:14.278960] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.849 [2024-02-13 08:30:14.279087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.849 [2024-02-13 08:30:14.279094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.849 [2024-02-13 08:30:14.279100] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.849 [2024-02-13 08:30:14.280821] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.849 [2024-02-13 08:30:14.289896] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.849 [2024-02-13 08:30:14.290310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.290674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.290685] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.849 [2024-02-13 08:30:14.290691] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.849 [2024-02-13 08:30:14.290811] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.849 [2024-02-13 08:30:14.290915] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.849 [2024-02-13 08:30:14.290922] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.849 [2024-02-13 08:30:14.290928] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.849 [2024-02-13 08:30:14.292725] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.849 [2024-02-13 08:30:14.301452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.849 [2024-02-13 08:30:14.301943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.302307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.302339] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.849 [2024-02-13 08:30:14.302361] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.849 [2024-02-13 08:30:14.302747] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.849 [2024-02-13 08:30:14.302902] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.849 [2024-02-13 08:30:14.302910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.849 [2024-02-13 08:30:14.302916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.849 [2024-02-13 08:30:14.304638] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.849 [2024-02-13 08:30:14.313312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.849 [2024-02-13 08:30:14.313809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.314204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.314235] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.849 [2024-02-13 08:30:14.314257] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.849 [2024-02-13 08:30:14.314537] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.849 [2024-02-13 08:30:14.314837] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.849 [2024-02-13 08:30:14.314848] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.849 [2024-02-13 08:30:14.314854] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.849 [2024-02-13 08:30:14.316534] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.849 [2024-02-13 08:30:14.325098] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.849 [2024-02-13 08:30:14.325620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.325985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.326018] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.849 [2024-02-13 08:30:14.326041] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.849 [2024-02-13 08:30:14.326267] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.849 [2024-02-13 08:30:14.326393] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.849 [2024-02-13 08:30:14.326401] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.849 [2024-02-13 08:30:14.326407] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.849 [2024-02-13 08:30:14.328056] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.849 [2024-02-13 08:30:14.337002] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:40.849 [2024-02-13 08:30:14.337503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.337901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.849 [2024-02-13 08:30:14.337936] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:40.849 [2024-02-13 08:30:14.337958] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:40.849 [2024-02-13 08:30:14.338055] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:40.849 [2024-02-13 08:30:14.338119] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:40.849 [2024-02-13 08:30:14.338126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:40.849 [2024-02-13 08:30:14.338132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:40.849 [2024-02-13 08:30:14.339742] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:40.849 [2024-02-13 08:30:14.348714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.849 [2024-02-13 08:30:14.349146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.849 [2024-02-13 08:30:14.349529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.849 [2024-02-13 08:30:14.349559] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.349582] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.350079] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.350274] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.350282] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.350291] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.352017] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.360542] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.361064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.361471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.361502] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.361524] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.361866] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.362182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.362190] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.362196] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.363805] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.372373] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.372859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.373095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.373105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.373111] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.373215] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.373292] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.373299] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.373304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.374976] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.384125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.384675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.385134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.385165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.385187] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.385616] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.385958] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.385983] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.386003] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.388009] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.396000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.396499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.396879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.396911] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.396933] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.397121] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.397246] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.397254] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.397260] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.398902] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.407770] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.408276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.408630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.408640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.408652] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.408777] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.408874] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.408881] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.408888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.410690] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.419371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.419901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.420076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.420107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.420128] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.420332] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.420428] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.420436] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.420443] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.422229] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.431224] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.431747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.432148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.432179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.432201] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.850 [2024-02-13 08:30:14.432367] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.850 [2024-02-13 08:30:14.432477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.850 [2024-02-13 08:30:14.432485] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.850 [2024-02-13 08:30:14.432491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.850 [2024-02-13 08:30:14.434117] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.850 [2024-02-13 08:30:14.442929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.850 [2024-02-13 08:30:14.443454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.443793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.850 [2024-02-13 08:30:14.443826] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.850 [2024-02-13 08:30:14.443848] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.444378] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.444582] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.444590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.444596] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.446255] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.851 [2024-02-13 08:30:14.454869] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.851 [2024-02-13 08:30:14.455411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.455790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.455824] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.851 [2024-02-13 08:30:14.455845] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.456177] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.456459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.456483] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.456504] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.458091] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.851 [2024-02-13 08:30:14.466704] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.851 [2024-02-13 08:30:14.467181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.467592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.467623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.851 [2024-02-13 08:30:14.467645] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.467941] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.468153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.468161] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.468167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.469901] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.851 [2024-02-13 08:30:14.478548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.851 [2024-02-13 08:30:14.478984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.479390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.479421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.851 [2024-02-13 08:30:14.479443] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.479839] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.480145] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.480153] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.480158] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.482539] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.851 [2024-02-13 08:30:14.491372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.851 [2024-02-13 08:30:14.491847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.492183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.492194] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.851 [2024-02-13 08:30:14.492201] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.492305] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.492456] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.492464] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.492470] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.494079] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.851 [2024-02-13 08:30:14.503156] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.851 [2024-02-13 08:30:14.503680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.504018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.504057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.851 [2024-02-13 08:30:14.504078] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.504436] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.504540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.504548] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.504553] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.506305] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.851 [2024-02-13 08:30:14.515025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.851 [2024-02-13 08:30:14.515529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.515910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.515943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.851 [2024-02-13 08:30:14.515964] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.516346] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.516613] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.516621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.516626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.518206] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:40.851 [2024-02-13 08:30:14.526774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:40.851 [2024-02-13 08:30:14.527296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.527626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.851 [2024-02-13 08:30:14.527671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:40.851 [2024-02-13 08:30:14.527694] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:40.851 [2024-02-13 08:30:14.527975] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:40.851 [2024-02-13 08:30:14.528355] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:40.851 [2024-02-13 08:30:14.528378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:40.851 [2024-02-13 08:30:14.528399] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:40.851 [2024-02-13 08:30:14.530397] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.113 [2024-02-13 08:30:14.538665] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.113 [2024-02-13 08:30:14.539187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.539541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.539572] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.113 [2024-02-13 08:30:14.539600] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.113 [2024-02-13 08:30:14.539728] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.113 [2024-02-13 08:30:14.539856] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.113 [2024-02-13 08:30:14.539864] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.113 [2024-02-13 08:30:14.539870] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.113 [2024-02-13 08:30:14.541642] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.113 [2024-02-13 08:30:14.550560] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.113 [2024-02-13 08:30:14.551060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.551369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.551379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.113 [2024-02-13 08:30:14.551386] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.113 [2024-02-13 08:30:14.551511] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.113 [2024-02-13 08:30:14.551591] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.113 [2024-02-13 08:30:14.551598] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.113 [2024-02-13 08:30:14.551604] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.113 [2024-02-13 08:30:14.553359] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.113 [2024-02-13 08:30:14.562275] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.113 [2024-02-13 08:30:14.562692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.563094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.563125] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.113 [2024-02-13 08:30:14.563147] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.113 [2024-02-13 08:30:14.563526] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.113 [2024-02-13 08:30:14.563798] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.113 [2024-02-13 08:30:14.563806] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.113 [2024-02-13 08:30:14.563812] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.113 [2024-02-13 08:30:14.565518] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.113 [2024-02-13 08:30:14.574069] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.113 [2024-02-13 08:30:14.574622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.574964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.574996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.113 [2024-02-13 08:30:14.575018] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.113 [2024-02-13 08:30:14.575557] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.113 [2024-02-13 08:30:14.575789] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.113 [2024-02-13 08:30:14.575797] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.113 [2024-02-13 08:30:14.575803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.113 [2024-02-13 08:30:14.577419] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.113 [2024-02-13 08:30:14.585576] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.113 [2024-02-13 08:30:14.586073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.586478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.586508] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.113 [2024-02-13 08:30:14.586531] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.113 [2024-02-13 08:30:14.586698] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.113 [2024-02-13 08:30:14.586824] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.113 [2024-02-13 08:30:14.586831] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.113 [2024-02-13 08:30:14.586838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.113 [2024-02-13 08:30:14.588520] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.113 [2024-02-13 08:30:14.597398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.113 [2024-02-13 08:30:14.597806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.598194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.113 [2024-02-13 08:30:14.598225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.113 [2024-02-13 08:30:14.598246] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.113 [2024-02-13 08:30:14.598499] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.113 [2024-02-13 08:30:14.598590] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.113 [2024-02-13 08:30:14.598597] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.113 [2024-02-13 08:30:14.598603] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.113 [2024-02-13 08:30:14.600341] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.113 [2024-02-13 08:30:14.609146] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.113 [2024-02-13 08:30:14.609589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.114 [2024-02-13 08:30:14.609944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.114 [2024-02-13 08:30:14.609977] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.114 [2024-02-13 08:30:14.609999] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.114 [2024-02-13 08:30:14.610329] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.114 [2024-02-13 08:30:14.610493] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.114 [2024-02-13 08:30:14.610501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.114 [2024-02-13 08:30:14.610507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.114 [2024-02-13 08:30:14.612289] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.114 [2024-02-13 08:30:14.621018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.621508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.621863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.621895] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.621925] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.622154] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.622313] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.114 [2024-02-13 08:30:14.622324] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.114 [2024-02-13 08:30:14.622335] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.114 [2024-02-13 08:30:14.625382] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.114 [2024-02-13 08:30:14.633664] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.634299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.634644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.634690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.634712] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.634978] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.635070] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.114 [2024-02-13 08:30:14.635079] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.114 [2024-02-13 08:30:14.635086] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.114 [2024-02-13 08:30:14.636951] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.114 [2024-02-13 08:30:14.645586] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.646086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.646375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.646406] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.646428] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.646769] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.646941] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.114 [2024-02-13 08:30:14.646951] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.114 [2024-02-13 08:30:14.646958] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.114 [2024-02-13 08:30:14.648455] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.114 [2024-02-13 08:30:14.657475] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.657982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.658387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.658418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.658439] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.658733] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.658967] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.114 [2024-02-13 08:30:14.658998] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.114 [2024-02-13 08:30:14.659004] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.114 [2024-02-13 08:30:14.660729] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.114 [2024-02-13 08:30:14.669299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.669819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.670187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.670218] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.670240] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.670740] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.670851] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.114 [2024-02-13 08:30:14.670859] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.114 [2024-02-13 08:30:14.670865] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.114 [2024-02-13 08:30:14.672522] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.114 [2024-02-13 08:30:14.681423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.681926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.682222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.682231] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.682238] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.682329] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.682433] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.114 [2024-02-13 08:30:14.682440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.114 [2024-02-13 08:30:14.682448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.114 [2024-02-13 08:30:14.684163] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.114 [2024-02-13 08:30:14.693197] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.693689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.694060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.694090] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.694113] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.694492] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.694936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.114 [2024-02-13 08:30:14.694962] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.114 [2024-02-13 08:30:14.694994] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.114 [2024-02-13 08:30:14.696622] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.114 [2024-02-13 08:30:14.705053] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.114 [2024-02-13 08:30:14.705497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.705840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.114 [2024-02-13 08:30:14.705874] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.114 [2024-02-13 08:30:14.705896] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.114 [2024-02-13 08:30:14.706120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.114 [2024-02-13 08:30:14.706216] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.706224] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.706230] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.707829] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.115 [2024-02-13 08:30:14.716914] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.115 [2024-02-13 08:30:14.717390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.717722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.717734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.115 [2024-02-13 08:30:14.717740] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.115 [2024-02-13 08:30:14.717866] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.115 [2024-02-13 08:30:14.718018] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.718026] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.718032] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.719752] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.115 [2024-02-13 08:30:14.728708] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.115 [2024-02-13 08:30:14.729177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.729578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.729609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.115 [2024-02-13 08:30:14.729631] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.115 [2024-02-13 08:30:14.729975] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.115 [2024-02-13 08:30:14.730132] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.730140] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.730146] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.731848] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.115 [2024-02-13 08:30:14.740546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.115 [2024-02-13 08:30:14.741056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.741345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.741356] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.115 [2024-02-13 08:30:14.741363] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.115 [2024-02-13 08:30:14.741488] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.115 [2024-02-13 08:30:14.741597] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.741605] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.741611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.743360] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.115 [2024-02-13 08:30:14.752406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.115 [2024-02-13 08:30:14.752818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.753099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.753110] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.115 [2024-02-13 08:30:14.753116] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.115 [2024-02-13 08:30:14.753256] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.115 [2024-02-13 08:30:14.753336] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.753344] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.753350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.755062] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.115 [2024-02-13 08:30:14.764209] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.115 [2024-02-13 08:30:14.764705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.764994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.765006] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.115 [2024-02-13 08:30:14.765013] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.115 [2024-02-13 08:30:14.765141] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.115 [2024-02-13 08:30:14.765239] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.765247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.765253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.766916] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.115 [2024-02-13 08:30:14.776037] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.115 [2024-02-13 08:30:14.776545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.776878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.776913] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.115 [2024-02-13 08:30:14.776934] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.115 [2024-02-13 08:30:14.777162] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.115 [2024-02-13 08:30:14.777286] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.777295] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.777301] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.779052] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.115 [2024-02-13 08:30:14.787742] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.115 [2024-02-13 08:30:14.788236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.788590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.115 [2024-02-13 08:30:14.788621] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.115 [2024-02-13 08:30:14.788643] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.115 [2024-02-13 08:30:14.789088] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.115 [2024-02-13 08:30:14.789322] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.115 [2024-02-13 08:30:14.789330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.115 [2024-02-13 08:30:14.789336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.115 [2024-02-13 08:30:14.791062] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.376 [2024-02-13 08:30:14.799783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.376 [2024-02-13 08:30:14.800246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-02-13 08:30:14.800600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.800632] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.800665] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.801044] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.801328] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.801352] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.801358] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.803068] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.811610] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.812079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.812408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.812439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.812460] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.812803] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.813191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.813199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.813206] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.815048] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.823480] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.823901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.824208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.824219] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.824225] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.824325] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.824452] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.824460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.824467] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.826414] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.835450] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.835962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.836218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.836231] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.836238] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.836337] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.836465] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.836473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.836479] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.838271] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.847691] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.848133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.848422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.848452] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.848475] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.848916] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.849135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.849143] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.849150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.850906] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.859492] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.859852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.860094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.860104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.860110] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.860235] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.860316] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.860323] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.860329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.862173] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.871481] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.871811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.872063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.872074] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.872084] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.872212] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.872325] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.872333] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.872339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.874183] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.883567] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.884047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.884340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.884371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.884391] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.884545] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.884613] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.884621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.884627] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.886411] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.895566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.377 [2024-02-13 08:30:14.895936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.896295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.377 [2024-02-13 08:30:14.896326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.377 [2024-02-13 08:30:14.896348] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.377 [2024-02-13 08:30:14.896790] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.377 [2024-02-13 08:30:14.897172] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.377 [2024-02-13 08:30:14.897196] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.377 [2024-02-13 08:30:14.897217] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.377 [2024-02-13 08:30:14.899021] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.377 [2024-02-13 08:30:14.907314] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.907685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.907929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.907938] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.907945] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.908087] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.908196] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.908204] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.908210] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.910078] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:14.919202] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.919681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.920014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.920045] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.920067] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.920545] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.920708] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.920717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.920723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.922460] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:14.931044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.931373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.931711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.931745] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.931767] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.932194] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.932575] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.932599] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.932619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.934441] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:14.942798] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.943198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.943434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.943444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.943450] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.943547] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.943645] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.943658] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.943664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.945407] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:14.954558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.955004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.955375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.955407] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.955429] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.955723] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.956155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.956180] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.956201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.959352] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:14.967484] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.967914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.968155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.968166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.968173] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.968262] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.968351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.968359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.968365] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.970220] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:14.979462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.979844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.980177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.980208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.980229] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.980462] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.980558] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.980570] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.980576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.982321] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:14.991570] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:14.992039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.992288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:14.992320] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:14.992343] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:14.992686] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:14.992883] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:14.992891] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:14.992896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:14.994561] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:15.003499] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.378 [2024-02-13 08:30:15.003973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:15.004251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.378 [2024-02-13 08:30:15.004282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.378 [2024-02-13 08:30:15.004303] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.378 [2024-02-13 08:30:15.004565] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.378 [2024-02-13 08:30:15.004680] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.378 [2024-02-13 08:30:15.004689] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.378 [2024-02-13 08:30:15.004695] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.378 [2024-02-13 08:30:15.006317] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.378 [2024-02-13 08:30:15.015289] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.379 [2024-02-13 08:30:15.015724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.016065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.016096] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.379 [2024-02-13 08:30:15.016120] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.379 [2024-02-13 08:30:15.016598] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.379 [2024-02-13 08:30:15.016731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.379 [2024-02-13 08:30:15.016740] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.379 [2024-02-13 08:30:15.016749] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.379 [2024-02-13 08:30:15.018428] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.379 [2024-02-13 08:30:15.027238] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.379 [2024-02-13 08:30:15.027690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.028063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.028095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.379 [2024-02-13 08:30:15.028117] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.379 [2024-02-13 08:30:15.028499] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.379 [2024-02-13 08:30:15.028659] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.379 [2024-02-13 08:30:15.028668] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.379 [2024-02-13 08:30:15.028675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.379 [2024-02-13 08:30:15.030523] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.379 [2024-02-13 08:30:15.039058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.379 [2024-02-13 08:30:15.039569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.039910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.039943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.379 [2024-02-13 08:30:15.039966] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.379 [2024-02-13 08:30:15.040343] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.379 [2024-02-13 08:30:15.040621] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.379 [2024-02-13 08:30:15.040629] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.379 [2024-02-13 08:30:15.040635] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.379 [2024-02-13 08:30:15.042283] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.379 [2024-02-13 08:30:15.050917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.379 [2024-02-13 08:30:15.051307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.051578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.379 [2024-02-13 08:30:15.051609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.379 [2024-02-13 08:30:15.051631] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.379 [2024-02-13 08:30:15.051964] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.379 [2024-02-13 08:30:15.052046] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.379 [2024-02-13 08:30:15.052054] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.379 [2024-02-13 08:30:15.052061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.379 [2024-02-13 08:30:15.053670] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.640 [2024-02-13 08:30:15.062996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.640 [2024-02-13 08:30:15.063344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.063632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.063643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.640 [2024-02-13 08:30:15.063654] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.640 [2024-02-13 08:30:15.063739] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.640 [2024-02-13 08:30:15.063837] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.640 [2024-02-13 08:30:15.063844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.640 [2024-02-13 08:30:15.063850] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.640 [2024-02-13 08:30:15.065595] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.640 [2024-02-13 08:30:15.074749] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.640 [2024-02-13 08:30:15.075210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.075498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.075507] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.640 [2024-02-13 08:30:15.075514] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.640 [2024-02-13 08:30:15.075631] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.640 [2024-02-13 08:30:15.075761] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.640 [2024-02-13 08:30:15.075769] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.640 [2024-02-13 08:30:15.075775] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.640 [2024-02-13 08:30:15.077661] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.640 [2024-02-13 08:30:15.086377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.640 [2024-02-13 08:30:15.086793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.087033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.087044] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.640 [2024-02-13 08:30:15.087050] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.640 [2024-02-13 08:30:15.087161] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.640 [2024-02-13 08:30:15.087284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.640 [2024-02-13 08:30:15.087292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.640 [2024-02-13 08:30:15.087298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.640 [2024-02-13 08:30:15.089092] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.640 [2024-02-13 08:30:15.098310] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.640 [2024-02-13 08:30:15.098701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.098939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.640 [2024-02-13 08:30:15.098950] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.640 [2024-02-13 08:30:15.098956] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.640 [2024-02-13 08:30:15.099052] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.640 [2024-02-13 08:30:15.099176] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.640 [2024-02-13 08:30:15.099184] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.640 [2024-02-13 08:30:15.099189] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.640 [2024-02-13 08:30:15.100872] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.640 [2024-02-13 08:30:15.110167] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.640 [2024-02-13 08:30:15.110426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.110725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.110757] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.110779] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.111258] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.111750] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.111776] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.111797] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.113676] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.122009] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.122355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.122604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.122634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.122670] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.122951] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.123143] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.123151] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.123157] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.124884] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.133816] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.134312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.134670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.134704] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.134727] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.134938] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.135036] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.135043] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.135049] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.136796] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.145972] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.146285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.146651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.146661] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.146668] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.146752] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.146879] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.146887] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.146893] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.148682] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.157883] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.158363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.158660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.158671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.158678] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.158798] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.158949] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.158957] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.158963] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.160741] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.169991] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.170499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.170845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.170859] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.170866] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.170955] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.171091] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.171098] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.171105] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.173081] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.182031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.182545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.182878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.182889] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.182896] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.183001] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.183121] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.183128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.183135] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.184882] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.193911] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.194411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.194819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.194852] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.194874] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.195104] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.195217] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.195225] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.195231] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.196964] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.205880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.206444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.206805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.641 [2024-02-13 08:30:15.206838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.641 [2024-02-13 08:30:15.206866] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.641 [2024-02-13 08:30:15.207036] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.641 [2024-02-13 08:30:15.207149] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.641 [2024-02-13 08:30:15.207156] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.641 [2024-02-13 08:30:15.207162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.641 [2024-02-13 08:30:15.208932] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.641 [2024-02-13 08:30:15.217611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.641 [2024-02-13 08:30:15.218136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.218467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.218498] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.218520] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.218913] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.219196] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.219219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.219240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.220837] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.229492] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.229976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.230382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.230413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.230434] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.230827] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.231067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.231075] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.231081] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.232818] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.241244] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.241748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.242132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.242164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.242185] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.242523] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.242930] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.242957] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.242977] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.244866] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.252967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.253479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.253883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.253917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.253939] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.254164] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.254288] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.254295] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.254301] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.255908] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.264790] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.265280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.265631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.265641] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.265653] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.265781] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.265923] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.265931] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.265937] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.267673] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.276691] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.277208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.277480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.277510] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.277531] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.277717] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.277859] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.277867] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.277873] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.279671] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.288684] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.289185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.289590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.289621] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.289643] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.289759] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.289869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.289877] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.289883] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.291413] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.300420] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.300907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.301290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.301322] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.301345] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.301480] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.301584] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.301591] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.301597] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.303333] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.312220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.312734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.313136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.313167] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.642 [2024-02-13 08:30:15.313188] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.642 [2024-02-13 08:30:15.313428] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.642 [2024-02-13 08:30:15.313545] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.642 [2024-02-13 08:30:15.313555] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.642 [2024-02-13 08:30:15.313561] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.642 [2024-02-13 08:30:15.315271] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.642 [2024-02-13 08:30:15.324191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.642 [2024-02-13 08:30:15.324623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.642 [2024-02-13 08:30:15.325000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.643 [2024-02-13 08:30:15.325011] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.643 [2024-02-13 08:30:15.325018] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.643 [2024-02-13 08:30:15.325131] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.643 [2024-02-13 08:30:15.325244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.643 [2024-02-13 08:30:15.325252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.643 [2024-02-13 08:30:15.325258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.903 [2024-02-13 08:30:15.327033] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.903 [2024-02-13 08:30:15.335998] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.903 [2024-02-13 08:30:15.336504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-02-13 08:30:15.336889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-02-13 08:30:15.336924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.903 [2024-02-13 08:30:15.336946] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.337276] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.337543] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.337550] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.337557] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.339295] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.347859] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.348387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.348716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.348750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.348774] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.349153] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.349427] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.349434] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.349443] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.351191] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.359737] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.360154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.360508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.360539] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.360561] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.360772] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.360868] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.360875] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.360882] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.362654] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.371484] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.372016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.372377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.372408] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.372430] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.372778] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.372922] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.372929] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.372935] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.374594] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.383335] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.383784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.384166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.384197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.384218] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.384599] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.384815] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.384823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.384829] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.386568] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.395286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.395756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.396114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.396145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.396167] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.396447] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.396632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.396639] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.396645] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.398464] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.406989] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.407488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.407897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.407933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.407955] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.408137] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.408616] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.408640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.408673] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.410335] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.418787] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.419307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.419723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.419734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.419741] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.419823] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.419918] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.419926] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.419932] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.421490] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.430605] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:41.904 [2024-02-13 08:30:15.431042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.431311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.904 [2024-02-13 08:30:15.431342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:41.904 [2024-02-13 08:30:15.431363] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:41.904 [2024-02-13 08:30:15.431755] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:41.904 [2024-02-13 08:30:15.431929] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:41.904 [2024-02-13 08:30:15.431941] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:41.904 [2024-02-13 08:30:15.431951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.904 [2024-02-13 08:30:15.434706] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:41.904 [2024-02-13 08:30:15.443333] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.904 [2024-02-13 08:30:15.443769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.904 [2024-02-13 08:30:15.444183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.904 [2024-02-13 08:30:15.444215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.444237] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.444566] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.444899] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.444908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.444914] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.446640] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.455298] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.455832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.456196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.456226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.456248] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.456594] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.456720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.456728] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.456734] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.458490] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.467315] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.467730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.468141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.468172] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.468194] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.468523] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.468677] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.468685] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.468691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.470434] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.479152] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.479673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.480129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.480160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.480181] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.480513] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.480907] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.480915] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.480921] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.482556] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.491094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.491556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.491985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.492018] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.492040] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.492519] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.492859] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.492885] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.492905] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.495985] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.503688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.504178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.504607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.504645] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.504682] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.504832] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.504952] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.504960] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.504967] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.506865] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.515537] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.516037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.516464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.516495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.516516] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.516908] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.517094] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.517102] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.517108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.518829] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.527391] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.527881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.528223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.528254] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.528275] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.528669] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.528954] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.528961] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.528967] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.530738] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.539252] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.539767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.540194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.540225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.540253] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.905 [2024-02-13 08:30:15.540632] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.905 [2024-02-13 08:30:15.540856] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.905 [2024-02-13 08:30:15.540864] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.905 [2024-02-13 08:30:15.540870] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.905 [2024-02-13 08:30:15.542546] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.905 [2024-02-13 08:30:15.551118] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.905 [2024-02-13 08:30:15.551601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.552044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.905 [2024-02-13 08:30:15.552076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.905 [2024-02-13 08:30:15.552099] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.906 [2024-02-13 08:30:15.552329] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.906 [2024-02-13 08:30:15.552700] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.906 [2024-02-13 08:30:15.552708] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.906 [2024-02-13 08:30:15.552714] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.906 [2024-02-13 08:30:15.554494] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.906 [2024-02-13 08:30:15.562946] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.906 [2024-02-13 08:30:15.563481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.906 [2024-02-13 08:30:15.563816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.906 [2024-02-13 08:30:15.563848] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.906 [2024-02-13 08:30:15.563869] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.906 [2024-02-13 08:30:15.564020] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.906 [2024-02-13 08:30:15.564172] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.906 [2024-02-13 08:30:15.564180] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.906 [2024-02-13 08:30:15.564186] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.906 [2024-02-13 08:30:15.565891] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.906 [2024-02-13 08:30:15.574685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.906 [2024-02-13 08:30:15.575144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.906 [2024-02-13 08:30:15.575506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.906 [2024-02-13 08:30:15.575536] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.906 [2024-02-13 08:30:15.575557] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.906 [2024-02-13 08:30:15.575840] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.906 [2024-02-13 08:30:15.575951] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.906 [2024-02-13 08:30:15.575958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.906 [2024-02-13 08:30:15.575964] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:41.906 [2024-02-13 08:30:15.577601] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:41.906 [2024-02-13 08:30:15.586657] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:41.906 [2024-02-13 08:30:15.587173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.906 [2024-02-13 08:30:15.587566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.906 [2024-02-13 08:30:15.587597] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:41.906 [2024-02-13 08:30:15.587619] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:41.906 [2024-02-13 08:30:15.588112] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:41.906 [2024-02-13 08:30:15.588211] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:41.906 [2024-02-13 08:30:15.588219] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:41.906 [2024-02-13 08:30:15.588225] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.167 [2024-02-13 08:30:15.589723] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.167 [2024-02-13 08:30:15.598611] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.167 [2024-02-13 08:30:15.599126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.167 [2024-02-13 08:30:15.599554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.167 [2024-02-13 08:30:15.599584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.167 [2024-02-13 08:30:15.599606] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.167 [2024-02-13 08:30:15.599901] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.167 [2024-02-13 08:30:15.600090] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.167 [2024-02-13 08:30:15.600097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.167 [2024-02-13 08:30:15.600104] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.167 [2024-02-13 08:30:15.601796] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2437978 Killed "${NVMF_APP[@]}" "$@"
00:29:42.167 08:30:15 -- host/bdevperf.sh@36 -- # tgt_init
00:29:42.167 08:30:15 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:42.167 08:30:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:29:42.167 08:30:15 -- common/autotest_common.sh@710 -- # xtrace_disable
00:29:42.167 08:30:15 -- common/autotest_common.sh@10 -- # set +x
00:29:42.167 [2024-02-13 08:30:15.610679] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.167 [2024-02-13 08:30:15.611139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.167 [2024-02-13 08:30:15.611516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.167 [2024-02-13 08:30:15.611530] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.167 [2024-02-13 08:30:15.611537] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.167 [2024-02-13 08:30:15.611606] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.168 [2024-02-13 08:30:15.611738] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.168 [2024-02-13 08:30:15.611746] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.168 [2024-02-13 08:30:15.611753] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.168 08:30:15 -- nvmf/common.sh@469 -- # nvmfpid=2439779
00:29:42.168 08:30:15 -- nvmf/common.sh@470 -- # waitforlisten 2439779
00:29:42.168 08:30:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:42.168 08:30:15 -- common/autotest_common.sh@817 -- # '[' -z 2439779 ']'
00:29:42.168 08:30:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:42.168 08:30:15 -- common/autotest_common.sh@822 -- # local max_retries=100
[2024-02-13 08:30:15.613781] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.168 08:30:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:42.168 08:30:15 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:42.168 08:30:15 -- common/autotest_common.sh@10 -- # set +x
00:29:42.168 [2024-02-13 08:30:15.622651] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.168 [2024-02-13 08:30:15.623096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.623474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.623485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.168 [2024-02-13 08:30:15.623492] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.168 [2024-02-13 08:30:15.623606] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.168 [2024-02-13 08:30:15.623739] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.168 [2024-02-13 08:30:15.623748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.168 [2024-02-13 08:30:15.623754] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.168 [2024-02-13 08:30:15.625511] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.168 [2024-02-13 08:30:15.634643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.168 [2024-02-13 08:30:15.635172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.635536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.635545] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.168 [2024-02-13 08:30:15.635552] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.168 [2024-02-13 08:30:15.635672] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.168 [2024-02-13 08:30:15.635786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.168 [2024-02-13 08:30:15.635797] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.168 [2024-02-13 08:30:15.635803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.168 [2024-02-13 08:30:15.637728] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.168 [2024-02-13 08:30:15.646539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.168 [2024-02-13 08:30:15.647008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.647307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.647316] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.168 [2024-02-13 08:30:15.647323] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.168 [2024-02-13 08:30:15.647405] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.168 [2024-02-13 08:30:15.647501] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.168 [2024-02-13 08:30:15.647509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.168 [2024-02-13 08:30:15.647515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.168 [2024-02-13 08:30:15.649052] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.168 [2024-02-13 08:30:15.658296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.168 [2024-02-13 08:30:15.658809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.659169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.659179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.168 [2024-02-13 08:30:15.659186] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.168 [2024-02-13 08:30:15.659296] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.168 [2024-02-13 08:30:15.659420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.168 [2024-02-13 08:30:15.659427] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.168 [2024-02-13 08:30:15.659434] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.168 [2024-02-13 08:30:15.659949] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:29:42.168 [2024-02-13 08:30:15.659990] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:42.168 [2024-02-13 08:30:15.661165] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.168 [2024-02-13 08:30:15.670228] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.168 [2024-02-13 08:30:15.670732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.671093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.671104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.168 [2024-02-13 08:30:15.671111] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.168 [2024-02-13 08:30:15.671178] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.168 [2024-02-13 08:30:15.671321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.168 [2024-02-13 08:30:15.671329] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.168 [2024-02-13 08:30:15.671335] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.168 [2024-02-13 08:30:15.673113] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.168 [2024-02-13 08:30:15.682104] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.168 [2024-02-13 08:30:15.682613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.682912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.168 [2024-02-13 08:30:15.682922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.168 [2024-02-13 08:30:15.682929] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.168 [2024-02-13 08:30:15.683053] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.168 [2024-02-13 08:30:15.683135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.168 [2024-02-13 08:30:15.683143] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.168 [2024-02-13 08:30:15.683149] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.168 [2024-02-13 08:30:15.684803] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.168 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.168 [2024-02-13 08:30:15.694119] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.168 [2024-02-13 08:30:15.694614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.168 [2024-02-13 08:30:15.694981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.168 [2024-02-13 08:30:15.694991] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.168 [2024-02-13 08:30:15.694998] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.168 [2024-02-13 08:30:15.695108] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.168 [2024-02-13 08:30:15.695189] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.168 [2024-02-13 08:30:15.695197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.168 [2024-02-13 08:30:15.695203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.168 [2024-02-13 08:30:15.696813] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.168 [2024-02-13 08:30:15.705964] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.168 [2024-02-13 08:30:15.706485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.168 [2024-02-13 08:30:15.706702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.168 [2024-02-13 08:30:15.706713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.168 [2024-02-13 08:30:15.706720] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.168 [2024-02-13 08:30:15.706834] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.706947] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.706958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.706965] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.708667] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.169 [2024-02-13 08:30:15.717885] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.718362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.718715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.718726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.718734] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.718859] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.718997] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.719006] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.719012] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.720799] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.169 [2024-02-13 08:30:15.722681] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.169 [2024-02-13 08:30:15.729756] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.730235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.730598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.730609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.730617] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.730749] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.730864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.730873] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.730880] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.732643] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.169 [2024-02-13 08:30:15.741763] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.742259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.742636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.742658] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.742666] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.742809] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.742952] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.742964] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.742972] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.744747] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.169 [2024-02-13 08:30:15.753595] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.754117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.754387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.754398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.754405] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.754501] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.754598] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.754607] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.754614] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.756405] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.169 [2024-02-13 08:30:15.765412] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.765893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.766195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.766206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.766214] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.766298] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.766423] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.766432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.766439] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.768079] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.169 [2024-02-13 08:30:15.777610] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.778040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.778275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.778285] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.778293] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.778436] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.778549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.778557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.778568] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.780290] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.169 [2024-02-13 08:30:15.789721] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.790186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.790582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.790592] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.790599] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.790699] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.790795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.790803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.790809] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.169 [2024-02-13 08:30:15.792627] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.169 [2024-02-13 08:30:15.793477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:42.169 [2024-02-13 08:30:15.793578] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.169 [2024-02-13 08:30:15.793586] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.169 [2024-02-13 08:30:15.793593] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:42.169 [2024-02-13 08:30:15.793656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.169 [2024-02-13 08:30:15.793681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.169 [2024-02-13 08:30:15.793683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.169 [2024-02-13 08:30:15.801657] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.169 [2024-02-13 08:30:15.802105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.802419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.169 [2024-02-13 08:30:15.802430] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.169 [2024-02-13 08:30:15.802438] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.169 [2024-02-13 08:30:15.802540] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.169 [2024-02-13 08:30:15.802640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.169 [2024-02-13 08:30:15.802653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.169 [2024-02-13 08:30:15.802661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.170 [2024-02-13 08:30:15.804536] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.170 [2024-02-13 08:30:15.813777] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.170 [2024-02-13 08:30:15.814282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.814658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.814669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.170 [2024-02-13 08:30:15.814684] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.170 [2024-02-13 08:30:15.814800] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.170 [2024-02-13 08:30:15.814899] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.170 [2024-02-13 08:30:15.814908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.170 [2024-02-13 08:30:15.814916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.170 [2024-02-13 08:30:15.816749] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.170 [2024-02-13 08:30:15.825636] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.170 [2024-02-13 08:30:15.826148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.826536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.826547] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.170 [2024-02-13 08:30:15.826555] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.170 [2024-02-13 08:30:15.826690] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.170 [2024-02-13 08:30:15.826805] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.170 [2024-02-13 08:30:15.826813] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.170 [2024-02-13 08:30:15.826820] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.170 [2024-02-13 08:30:15.828608] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.170 [2024-02-13 08:30:15.837518] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.170 [2024-02-13 08:30:15.837968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.838341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.838352] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.170 [2024-02-13 08:30:15.838360] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.170 [2024-02-13 08:30:15.838460] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.170 [2024-02-13 08:30:15.838588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.170 [2024-02-13 08:30:15.838597] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.170 [2024-02-13 08:30:15.838603] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.170 [2024-02-13 08:30:15.840380] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.170 [2024-02-13 08:30:15.849384] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.170 [2024-02-13 08:30:15.849859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.850219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.170 [2024-02-13 08:30:15.850230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.170 [2024-02-13 08:30:15.850238] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.170 [2024-02-13 08:30:15.850359] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.170 [2024-02-13 08:30:15.850430] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.170 [2024-02-13 08:30:15.850440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.170 [2024-02-13 08:30:15.850447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.170 [2024-02-13 08:30:15.852222] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.431 [2024-02-13 08:30:15.861194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.431 [2024-02-13 08:30:15.861699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.862085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.862096] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.431 [2024-02-13 08:30:15.862103] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.431 [2024-02-13 08:30:15.862217] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.431 [2024-02-13 08:30:15.862345] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.431 [2024-02-13 08:30:15.862353] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.431 [2024-02-13 08:30:15.862360] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.431 [2024-02-13 08:30:15.864205] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.431 [2024-02-13 08:30:15.872984] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.431 [2024-02-13 08:30:15.873379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.873728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.873741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.431 [2024-02-13 08:30:15.873748] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.431 [2024-02-13 08:30:15.873865] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.431 [2024-02-13 08:30:15.873992] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.431 [2024-02-13 08:30:15.874001] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.431 [2024-02-13 08:30:15.874007] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.431 [2024-02-13 08:30:15.875666] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.431 [2024-02-13 08:30:15.884840] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.431 [2024-02-13 08:30:15.885250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.885612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.885623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.431 [2024-02-13 08:30:15.885630] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.431 [2024-02-13 08:30:15.885735] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.431 [2024-02-13 08:30:15.885867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.431 [2024-02-13 08:30:15.885880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.431 [2024-02-13 08:30:15.885886] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.431 [2024-02-13 08:30:15.887543] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.431 [2024-02-13 08:30:15.896835] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.431 [2024-02-13 08:30:15.897328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.897596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.431 [2024-02-13 08:30:15.897606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.432 [2024-02-13 08:30:15.897613] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.432 [2024-02-13 08:30:15.897762] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.432 [2024-02-13 08:30:15.897846] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.432 [2024-02-13 08:30:15.897855] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.432 [2024-02-13 08:30:15.897861] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.432 [2024-02-13 08:30:15.899665] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.432 [2024-02-13 08:30:15.908832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.432 [2024-02-13 08:30:15.909305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.432 [2024-02-13 08:30:15.909665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.432 [2024-02-13 08:30:15.909676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.432 [2024-02-13 08:30:15.909684] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.432 [2024-02-13 08:30:15.909814] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.432 [2024-02-13 08:30:15.909913] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.432 [2024-02-13 08:30:15.909921] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.432 [2024-02-13 08:30:15.909928] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.432 [2024-02-13 08:30:15.911792] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.432 [2024-02-13 08:30:15.920629] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.432 [2024-02-13 08:30:15.921081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.432 [2024-02-13 08:30:15.921443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.432 [2024-02-13 08:30:15.921454] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.432 [2024-02-13 08:30:15.921461] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.432 [2024-02-13 08:30:15.921575] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.432 [2024-02-13 08:30:15.921721] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.432 [2024-02-13 08:30:15.921732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.432 [2024-02-13 08:30:15.921738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.432 [2024-02-13 08:30:15.923480] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.432 [2024-02-13 08:30:15.932590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.432 [2024-02-13 08:30:15.933132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.933433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.933443] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.432 [2024-02-13 08:30:15.933449] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.432 [2024-02-13 08:30:15.933577] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.432 [2024-02-13 08:30:15.933724] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.432 [2024-02-13 08:30:15.933733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.432 [2024-02-13 08:30:15.933739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.432 [2024-02-13 08:30:15.935509] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.432 [2024-02-13 08:30:15.944696] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.432 [2024-02-13 08:30:15.945204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.945628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.945638] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.432 [2024-02-13 08:30:15.945645] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.432 [2024-02-13 08:30:15.945719] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.432 [2024-02-13 08:30:15.945832] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.432 [2024-02-13 08:30:15.945840] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.432 [2024-02-13 08:30:15.945846] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.432 [2024-02-13 08:30:15.947443] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.432 [2024-02-13 08:30:15.956589] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.432 [2024-02-13 08:30:15.956998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.957360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.957370] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.432 [2024-02-13 08:30:15.957377] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.432 [2024-02-13 08:30:15.957505] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.432 [2024-02-13 08:30:15.957618] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.432 [2024-02-13 08:30:15.957626] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.432 [2024-02-13 08:30:15.957636] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.432 [2024-02-13 08:30:15.959454] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.432 [2024-02-13 08:30:15.968623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.432 [2024-02-13 08:30:15.969137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.969514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.969525] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.432 [2024-02-13 08:30:15.969532] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.432 [2024-02-13 08:30:15.969631] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.432 [2024-02-13 08:30:15.969748] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.432 [2024-02-13 08:30:15.969756] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.432 [2024-02-13 08:30:15.969763] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.432 [2024-02-13 08:30:15.971638] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.432 [2024-02-13 08:30:15.980759] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.432 [2024-02-13 08:30:15.981246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.981644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.981658] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.432 [2024-02-13 08:30:15.981665] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.432 [2024-02-13 08:30:15.981793] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.432 [2024-02-13 08:30:15.981862] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.432 [2024-02-13 08:30:15.981869] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.432 [2024-02-13 08:30:15.981876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.432 [2024-02-13 08:30:15.983531] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.432 [2024-02-13 08:30:15.992768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.432 [2024-02-13 08:30:15.993228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.993616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:15.993626] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.432 [2024-02-13 08:30:15.993633] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.432 [2024-02-13 08:30:15.993750] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.432 [2024-02-13 08:30:15.993863] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.432 [2024-02-13 08:30:15.993872] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.432 [2024-02-13 08:30:15.993878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.432 [2024-02-13 08:30:15.995640] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.432 [2024-02-13 08:30:16.004698] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.432 [2024-02-13 08:30:16.005163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:16.005538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.432 [2024-02-13 08:30:16.005549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.005555] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.005702] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.005816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.005824] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.005830] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.007556] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.016684] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.017155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.017505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.017515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.017521] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.017635] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.017766] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.017775] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.017781] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.019578] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.028549] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.029000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.029361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.029371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.029378] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.029477] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.029605] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.029613] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.029619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.031449] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.040471] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.040924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.041300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.041311] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.041318] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.041387] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.041501] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.041508] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.041514] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.043217] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.052437] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.052850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.053235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.053245] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.053252] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.053366] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.053508] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.053516] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.053522] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.055252] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.064411] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.064878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.065280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.065291] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.065297] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.065425] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.065494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.065501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.065507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.067222] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.076439] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.076832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.077179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.077190] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.077196] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.077339] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.077482] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.077490] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.077496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.079212] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.088428] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.088931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.089318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.089328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.089335] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.089478] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.089590] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.089599] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.089605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.091305] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.100284] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.100768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.101161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.101172] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.433 [2024-02-13 08:30:16.101179] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.433 [2024-02-13 08:30:16.101263] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.433 [2024-02-13 08:30:16.101391] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.433 [2024-02-13 08:30:16.101399] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.433 [2024-02-13 08:30:16.101406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.433 [2024-02-13 08:30:16.103180] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.433 [2024-02-13 08:30:16.112234] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.433 [2024-02-13 08:30:16.112754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.433 [2024-02-13 08:30:16.113177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.434 [2024-02-13 08:30:16.113187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.434 [2024-02-13 08:30:16.113194] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.434 [2024-02-13 08:30:16.113308] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.434 [2024-02-13 08:30:16.113391] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.434 [2024-02-13 08:30:16.113399] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.434 [2024-02-13 08:30:16.113405] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.434 [2024-02-13 08:30:16.115176] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.695 [2024-02-13 08:30:16.124200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.695 [2024-02-13 08:30:16.124678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.124974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.124985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.695 [2024-02-13 08:30:16.124991] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.695 [2024-02-13 08:30:16.125119] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.695 [2024-02-13 08:30:16.125246] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.695 [2024-02-13 08:30:16.125254] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.695 [2024-02-13 08:30:16.125260] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.695 [2024-02-13 08:30:16.127078] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.695 [2024-02-13 08:30:16.136378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.695 [2024-02-13 08:30:16.136941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.137254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.137264] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.695 [2024-02-13 08:30:16.137271] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.695 [2024-02-13 08:30:16.137414] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.695 [2024-02-13 08:30:16.137511] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.695 [2024-02-13 08:30:16.137519] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.695 [2024-02-13 08:30:16.137526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.695 [2024-02-13 08:30:16.139256] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.695 [2024-02-13 08:30:16.148340] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.695 [2024-02-13 08:30:16.148806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.149141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.149151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.695 [2024-02-13 08:30:16.149161] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.695 [2024-02-13 08:30:16.149289] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.695 [2024-02-13 08:30:16.149358] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.695 [2024-02-13 08:30:16.149366] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.695 [2024-02-13 08:30:16.149372] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.695 [2024-02-13 08:30:16.150913] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.695 [2024-02-13 08:30:16.160392] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.695 [2024-02-13 08:30:16.160800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.161156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.161166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.695 [2024-02-13 08:30:16.161172] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.695 [2024-02-13 08:30:16.161271] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.695 [2024-02-13 08:30:16.161442] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.695 [2024-02-13 08:30:16.161450] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.695 [2024-02-13 08:30:16.161455] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.695 [2024-02-13 08:30:16.163171] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.695 [2024-02-13 08:30:16.172297] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.695 [2024-02-13 08:30:16.172769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.173125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.173136] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.695 [2024-02-13 08:30:16.173143] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.695 [2024-02-13 08:30:16.173257] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.695 [2024-02-13 08:30:16.173369] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.695 [2024-02-13 08:30:16.173378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.695 [2024-02-13 08:30:16.173384] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.695 [2024-02-13 08:30:16.175201] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.695 [2024-02-13 08:30:16.184380] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.695 [2024-02-13 08:30:16.184873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.185252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.185262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.695 [2024-02-13 08:30:16.185269] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.695 [2024-02-13 08:30:16.185342] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.695 [2024-02-13 08:30:16.185484] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.695 [2024-02-13 08:30:16.185492] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.695 [2024-02-13 08:30:16.185498] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.695 [2024-02-13 08:30:16.187286] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.695 [2024-02-13 08:30:16.196419] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.695 [2024-02-13 08:30:16.196936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.197217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.695 [2024-02-13 08:30:16.197228] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.695 [2024-02-13 08:30:16.197234] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.695 [2024-02-13 08:30:16.197362] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.695 [2024-02-13 08:30:16.197475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.695 [2024-02-13 08:30:16.197483] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.695 [2024-02-13 08:30:16.197489] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.696 [2024-02-13 08:30:16.199150] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.696 [2024-02-13 08:30:16.208402] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.696 [2024-02-13 08:30:16.208822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.209192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.209202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.696 [2024-02-13 08:30:16.209209] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.696 [2024-02-13 08:30:16.209322] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.696 [2024-02-13 08:30:16.209391] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.696 [2024-02-13 08:30:16.209398] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.696 [2024-02-13 08:30:16.209404] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.696 [2024-02-13 08:30:16.211223] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.696 [2024-02-13 08:30:16.220450] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.696 [2024-02-13 08:30:16.220979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.221343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.221353] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.696 [2024-02-13 08:30:16.221360] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.696 [2024-02-13 08:30:16.221473] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.696 [2024-02-13 08:30:16.221621] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.696 [2024-02-13 08:30:16.221629] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.696 [2024-02-13 08:30:16.221635] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.696 [2024-02-13 08:30:16.223424] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.696 [2024-02-13 08:30:16.232399] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.696 [2024-02-13 08:30:16.232820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.233103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.233113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.696 [2024-02-13 08:30:16.233120] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.696 [2024-02-13 08:30:16.233249] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.696 [2024-02-13 08:30:16.233361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.696 [2024-02-13 08:30:16.233371] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.696 [2024-02-13 08:30:16.233378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.696 [2024-02-13 08:30:16.235099] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.696 [2024-02-13 08:30:16.244620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:42.696 [2024-02-13 08:30:16.245020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.245389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.696 [2024-02-13 08:30:16.245400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420
00:29:42.696 [2024-02-13 08:30:16.245406] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set
00:29:42.696 [2024-02-13 08:30:16.245505] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor
00:29:42.696 [2024-02-13 08:30:16.245632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:42.696 [2024-02-13 08:30:16.245640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:42.696 [2024-02-13 08:30:16.245651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:42.696 [2024-02-13 08:30:16.247393] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:42.696 [2024-02-13 08:30:16.256655] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.696 [2024-02-13 08:30:16.257143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.257431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.257441] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.696 [2024-02-13 08:30:16.257448] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.696 [2024-02-13 08:30:16.257547] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.696 [2024-02-13 08:30:16.257665] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.696 [2024-02-13 08:30:16.257677] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.696 [2024-02-13 08:30:16.257683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.696 [2024-02-13 08:30:16.259425] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.696 [2024-02-13 08:30:16.268527] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.696 [2024-02-13 08:30:16.268992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.269303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.269314] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.696 [2024-02-13 08:30:16.269320] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.696 [2024-02-13 08:30:16.269404] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.696 [2024-02-13 08:30:16.269517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.696 [2024-02-13 08:30:16.269528] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.696 [2024-02-13 08:30:16.269534] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.696 [2024-02-13 08:30:16.271250] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.696 [2024-02-13 08:30:16.280572] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.696 [2024-02-13 08:30:16.281018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.281306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.281316] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.696 [2024-02-13 08:30:16.281323] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.696 [2024-02-13 08:30:16.281436] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.696 [2024-02-13 08:30:16.281549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.696 [2024-02-13 08:30:16.281557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.696 [2024-02-13 08:30:16.281563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.696 [2024-02-13 08:30:16.283354] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.696 [2024-02-13 08:30:16.292508] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.696 [2024-02-13 08:30:16.292942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.293181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.293191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.696 [2024-02-13 08:30:16.293198] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.696 [2024-02-13 08:30:16.293326] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.696 [2024-02-13 08:30:16.293468] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.696 [2024-02-13 08:30:16.293477] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.696 [2024-02-13 08:30:16.293486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.696 [2024-02-13 08:30:16.295290] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.696 [2024-02-13 08:30:16.304543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.696 [2024-02-13 08:30:16.304967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.305301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.696 [2024-02-13 08:30:16.305312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.696 [2024-02-13 08:30:16.305319] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.696 [2024-02-13 08:30:16.305420] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.696 [2024-02-13 08:30:16.305548] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.696 [2024-02-13 08:30:16.305557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.696 [2024-02-13 08:30:16.305563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.697 [2024-02-13 08:30:16.307384] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.697 [2024-02-13 08:30:16.316510] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.697 [2024-02-13 08:30:16.316837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.317071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.317081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.697 [2024-02-13 08:30:16.317088] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.697 [2024-02-13 08:30:16.317217] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.697 [2024-02-13 08:30:16.317300] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.697 [2024-02-13 08:30:16.317308] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.697 [2024-02-13 08:30:16.317314] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.697 [2024-02-13 08:30:16.319103] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.697 [2024-02-13 08:30:16.328374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.697 [2024-02-13 08:30:16.328782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.329101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.329111] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.697 [2024-02-13 08:30:16.329118] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.697 [2024-02-13 08:30:16.329247] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.697 [2024-02-13 08:30:16.329389] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.697 [2024-02-13 08:30:16.329397] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.697 [2024-02-13 08:30:16.329403] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.697 [2024-02-13 08:30:16.331198] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.697 [2024-02-13 08:30:16.340178] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.697 [2024-02-13 08:30:16.340517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.340757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.340768] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.697 [2024-02-13 08:30:16.340775] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.697 [2024-02-13 08:30:16.340889] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.697 [2024-02-13 08:30:16.341003] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.697 [2024-02-13 08:30:16.341011] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.697 [2024-02-13 08:30:16.341018] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.697 [2024-02-13 08:30:16.342813] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.697 [2024-02-13 08:30:16.352060] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.697 [2024-02-13 08:30:16.352572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.352817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.352828] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.697 [2024-02-13 08:30:16.352835] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.697 [2024-02-13 08:30:16.352934] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.697 [2024-02-13 08:30:16.353032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.697 [2024-02-13 08:30:16.353040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.697 [2024-02-13 08:30:16.353047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.697 [2024-02-13 08:30:16.354894] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.697 [2024-02-13 08:30:16.364039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.697 [2024-02-13 08:30:16.364454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.364712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.364724] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.697 [2024-02-13 08:30:16.364731] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.697 [2024-02-13 08:30:16.364846] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.697 [2024-02-13 08:30:16.364974] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.697 [2024-02-13 08:30:16.364982] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.697 [2024-02-13 08:30:16.364989] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.697 [2024-02-13 08:30:16.366822] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.697 [2024-02-13 08:30:16.375996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.697 [2024-02-13 08:30:16.376499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.376840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.697 [2024-02-13 08:30:16.376852] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.697 [2024-02-13 08:30:16.376859] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.697 [2024-02-13 08:30:16.376974] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.697 [2024-02-13 08:30:16.377073] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.697 [2024-02-13 08:30:16.377082] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.697 [2024-02-13 08:30:16.377088] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.697 [2024-02-13 08:30:16.378745] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.958 [2024-02-13 08:30:16.388136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.958 [2024-02-13 08:30:16.388674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-02-13 08:30:16.389024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.389036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.389042] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.389171] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.389269] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.389278] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.389285] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.391176] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 [2024-02-13 08:30:16.399870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 [2024-02-13 08:30:16.402918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.403166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.403177] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.403183] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.403312] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.403454] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.403463] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.403469] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.405348] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 [2024-02-13 08:30:16.411743] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 [2024-02-13 08:30:16.412128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.412364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.412375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.412382] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.412481] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.412608] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.412617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.412623] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.414485] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 [2024-02-13 08:30:16.423645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 [2024-02-13 08:30:16.424007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.424204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.424215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.424223] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.424352] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.424494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.424502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.424510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.426198] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 [2024-02-13 08:30:16.435521] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 [2024-02-13 08:30:16.436057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.436424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.436434] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.436441] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.436540] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.436638] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.436650] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.436657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.438298] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 [2024-02-13 08:30:16.447626] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 [2024-02-13 08:30:16.448022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.448382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.448393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.448400] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.448543] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.448660] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.448669] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.448675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.450534] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 08:30:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:42.959 08:30:16 -- common/autotest_common.sh@850 -- # return 0 00:29:42.959 08:30:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:42.959 [2024-02-13 08:30:16.459422] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 08:30:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:42.959 08:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.959 [2024-02-13 08:30:16.459881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.460158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.460168] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.460175] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.460274] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.460358] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.460365] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.460372] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.462145] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 [2024-02-13 08:30:16.471328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 [2024-02-13 08:30:16.471809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.472050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.472060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.472067] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.472180] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.472278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.472286] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.472292] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.959 [2024-02-13 08:30:16.473950] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.959 [2024-02-13 08:30:16.483364] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.959 [2024-02-13 08:30:16.483766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.484014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-02-13 08:30:16.484024] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.959 [2024-02-13 08:30:16.484031] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.959 [2024-02-13 08:30:16.484145] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.959 [2024-02-13 08:30:16.484257] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.959 [2024-02-13 08:30:16.484266] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.959 [2024-02-13 08:30:16.484272] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.960 [2024-02-13 08:30:16.485992] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.960 08:30:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.960 [2024-02-13 08:30:16.495274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.960 08:30:16 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.960 [2024-02-13 08:30:16.495705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 08:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.960 [2024-02-13 08:30:16.495947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.495958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.960 [2024-02-13 08:30:16.495968] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.960 08:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.960 [2024-02-13 08:30:16.496097] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.960 [2024-02-13 08:30:16.496227] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.960 [2024-02-13 08:30:16.496235] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.960 [2024-02-13 08:30:16.496241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.960 [2024-02-13 08:30:16.498092] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.960 [2024-02-13 08:30:16.501712] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.960 08:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.960 08:30:16 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:42.960 08:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.960 [2024-02-13 08:30:16.507195] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.960 08:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.960 [2024-02-13 08:30:16.507762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.508018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.508028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.960 [2024-02-13 08:30:16.508035] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.960 [2024-02-13 08:30:16.508120] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.960 [2024-02-13 08:30:16.508276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.960 [2024-02-13 08:30:16.508287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.960 [2024-02-13 08:30:16.508293] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.960 [2024-02-13 08:30:16.509937] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.960 [2024-02-13 08:30:16.519242] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.960 [2024-02-13 08:30:16.519758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.520006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.520017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.960 [2024-02-13 08:30:16.520024] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.960 [2024-02-13 08:30:16.520137] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.960 [2024-02-13 08:30:16.520265] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.960 [2024-02-13 08:30:16.520273] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.960 [2024-02-13 08:30:16.520279] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.960 [2024-02-13 08:30:16.522084] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.960 [2024-02-13 08:30:16.531208] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.960 [2024-02-13 08:30:16.531555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.531705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.531716] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.960 [2024-02-13 08:30:16.531723] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.960 [2024-02-13 08:30:16.531822] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.960 [2024-02-13 08:30:16.531950] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.960 [2024-02-13 08:30:16.531958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.960 [2024-02-13 08:30:16.531964] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.960 [2024-02-13 08:30:16.533607] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.960 [2024-02-13 08:30:16.543199] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.960 [2024-02-13 08:30:16.543658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.544037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.544048] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.960 [2024-02-13 08:30:16.544055] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.960 [2024-02-13 08:30:16.544169] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.960 [2024-02-13 08:30:16.544283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.960 [2024-02-13 08:30:16.544291] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.960 [2024-02-13 08:30:16.544302] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.960 Malloc0 00:29:42.960 [2024-02-13 08:30:16.546110] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.960 08:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.960 08:30:16 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.960 08:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.960 08:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.960 [2024-02-13 08:30:16.555220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.960 [2024-02-13 08:30:16.555716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.556008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.556018] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.960 [2024-02-13 08:30:16.556025] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.960 [2024-02-13 08:30:16.556125] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.960 [2024-02-13 08:30:16.556224] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.960 [2024-02-13 08:30:16.556231] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.960 [2024-02-13 08:30:16.556237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.960 [2024-02-13 08:30:16.557955] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.960 08:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.960 08:30:16 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.960 08:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.960 08:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.960 08:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.960 08:30:16 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.960 08:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.960 [2024-02-13 08:30:16.567099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.960 08:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.960 [2024-02-13 08:30:16.567430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.567743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-02-13 08:30:16.567755] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be6630 with addr=10.0.0.2, port=4420 00:29:42.960 [2024-02-13 08:30:16.567763] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6630 is same with the state(5) to be set 00:29:42.960 [2024-02-13 08:30:16.567908] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be6630 (9): Bad file descriptor 00:29:42.960 [2024-02-13 08:30:16.567992] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.960 [2024-02-13 08:30:16.568001] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.960 [2024-02-13 08:30:16.568007] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:42.960 [2024-02-13 08:30:16.569691] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.960 [2024-02-13 08:30:16.569724] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.960 08:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.960 08:30:16 -- host/bdevperf.sh@38 -- # wait 2438822 00:29:42.960 [2024-02-13 08:30:16.579116] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.220 [2024-02-13 08:30:16.689183] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:51.343 00:29:51.343 Latency(us) 00:29:51.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.343 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:51.343 Verification LBA range: start 0x0 length 0x4000 00:29:51.343 Nvme1n1 : 15.00 12559.92 49.06 19524.21 0.00 3977.95 1100.07 17476.27 00:29:51.343 =================================================================================================================== 00:29:51.343 Total : 12559.92 49.06 19524.21 0.00 3977.95 1100.07 17476.27 00:29:51.343 [2024-02-13 08:30:24.960747] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:51.602 08:30:25 -- host/bdevperf.sh@39 -- # sync 00:29:51.602 08:30:25 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.602 08:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:51.602 08:30:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.602 08:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:51.602 08:30:25 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:51.602 08:30:25 -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:51.602 
08:30:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:51.602 08:30:25 -- nvmf/common.sh@116 -- # sync 00:29:51.602 08:30:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:51.602 08:30:25 -- nvmf/common.sh@119 -- # set +e 00:29:51.602 08:30:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:51.602 08:30:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:51.602 rmmod nvme_tcp 00:29:51.602 rmmod nvme_fabrics 00:29:51.602 rmmod nvme_keyring 00:29:51.602 08:30:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:51.602 08:30:25 -- nvmf/common.sh@123 -- # set -e 00:29:51.602 08:30:25 -- nvmf/common.sh@124 -- # return 0 00:29:51.602 08:30:25 -- nvmf/common.sh@477 -- # '[' -n 2439779 ']' 00:29:51.602 08:30:25 -- nvmf/common.sh@478 -- # killprocess 2439779 00:29:51.602 08:30:25 -- common/autotest_common.sh@924 -- # '[' -z 2439779 ']' 00:29:51.602 08:30:25 -- common/autotest_common.sh@928 -- # kill -0 2439779 00:29:51.602 08:30:25 -- common/autotest_common.sh@929 -- # uname 00:29:51.602 08:30:25 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:51.602 08:30:25 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2439779 00:29:51.602 08:30:25 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:29:51.602 08:30:25 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:29:51.602 08:30:25 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2439779' 00:29:51.602 killing process with pid 2439779 00:29:51.602 08:30:25 -- common/autotest_common.sh@943 -- # kill 2439779 00:29:51.602 08:30:25 -- common/autotest_common.sh@948 -- # wait 2439779 00:29:51.901 08:30:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:51.901 08:30:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:51.901 08:30:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:51.901 08:30:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:51.901 08:30:25 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:29:51.901 08:30:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.901 08:30:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.902 08:30:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.441 08:30:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:54.441 00:29:54.441 real 0m26.603s 00:29:54.441 user 1m2.796s 00:29:54.441 sys 0m6.698s 00:29:54.441 08:30:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:54.441 08:30:27 -- common/autotest_common.sh@10 -- # set +x 00:29:54.441 ************************************ 00:29:54.441 END TEST nvmf_bdevperf 00:29:54.441 ************************************ 00:29:54.441 08:30:27 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:54.441 08:30:27 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:29:54.441 08:30:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:54.441 08:30:27 -- common/autotest_common.sh@10 -- # set +x 00:29:54.441 ************************************ 00:29:54.441 START TEST nvmf_target_disconnect 00:29:54.441 ************************************ 00:29:54.441 08:30:27 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:54.441 * Looking for test storage... 
00:29:54.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:54.441 08:30:27 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.441 08:30:27 -- nvmf/common.sh@7 -- # uname -s 00:29:54.441 08:30:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.441 08:30:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.441 08:30:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.441 08:30:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.441 08:30:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.441 08:30:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.441 08:30:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.441 08:30:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.441 08:30:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.441 08:30:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.441 08:30:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:54.441 08:30:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:54.441 08:30:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.441 08:30:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.441 08:30:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.441 08:30:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.441 08:30:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.441 08:30:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.441 08:30:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.441 08:30:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.441 08:30:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.441 08:30:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.441 08:30:27 -- paths/export.sh@5 -- # export PATH 00:29:54.441 08:30:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.441 08:30:27 -- nvmf/common.sh@46 -- # : 0 00:29:54.442 08:30:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:54.442 08:30:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:54.442 08:30:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:54.442 08:30:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.442 08:30:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.442 08:30:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:54.442 08:30:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:54.442 08:30:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:54.442 08:30:27 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:54.442 08:30:27 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:54.442 08:30:27 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:54.442 08:30:27 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:54.442 08:30:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:54.442 08:30:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.442 08:30:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:54.442 08:30:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:54.442 08:30:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:54.442 08:30:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.442 08:30:27 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.442 08:30:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.442 08:30:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:54.442 08:30:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:54.442 08:30:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:54.442 08:30:27 -- common/autotest_common.sh@10 -- # set +x 00:29:59.718 08:30:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:59.718 08:30:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:59.718 08:30:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:59.718 08:30:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:59.718 08:30:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:59.718 08:30:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:59.718 08:30:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:59.718 08:30:33 -- nvmf/common.sh@294 -- # net_devs=() 00:29:59.718 08:30:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:59.718 08:30:33 -- nvmf/common.sh@295 -- # e810=() 00:29:59.718 08:30:33 -- nvmf/common.sh@295 -- # local -ga e810 00:29:59.718 08:30:33 -- nvmf/common.sh@296 -- # x722=() 00:29:59.718 08:30:33 -- nvmf/common.sh@296 -- # local -ga x722 00:29:59.718 08:30:33 -- nvmf/common.sh@297 -- # mlx=() 00:29:59.718 08:30:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:59.718 08:30:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.718 08:30:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:59.718 08:30:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:59.718 08:30:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:59.718 08:30:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.718 08:30:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.718 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.718 08:30:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:59.718 08:30:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.718 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.718 08:30:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:29:59.718 08:30:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:59.718 08:30:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.718 08:30:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.718 08:30:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.718 08:30:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.718 08:30:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.718 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.718 08:30:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.718 08:30:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:59.718 08:30:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.718 08:30:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:59.718 08:30:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.718 08:30:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.718 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.718 08:30:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.718 08:30:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:59.718 08:30:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:59.718 08:30:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:59.718 08:30:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:59.718 08:30:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.718 08:30:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.718 08:30:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.718 08:30:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:59.718 08:30:33 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.718 08:30:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.718 08:30:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:59.718 08:30:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.718 08:30:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.718 08:30:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:59.718 08:30:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:59.718 08:30:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.718 08:30:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.718 08:30:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.718 08:30:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.719 08:30:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:59.719 08:30:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.719 08:30:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.719 08:30:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.719 08:30:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:59.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:29:59.719 00:29:59.719 --- 10.0.0.2 ping statistics --- 00:29:59.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.719 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:29:59.719 08:30:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:29:59.719 00:29:59.719 --- 10.0.0.1 ping statistics --- 00:29:59.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.719 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:29:59.978 08:30:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.978 08:30:33 -- nvmf/common.sh@410 -- # return 0 00:29:59.978 08:30:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:59.978 08:30:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.978 08:30:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:59.978 08:30:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:59.978 08:30:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.978 08:30:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:59.978 08:30:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:59.978 08:30:33 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:59.978 08:30:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:29:59.978 08:30:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:59.978 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:29:59.978 ************************************ 00:29:59.978 START TEST nvmf_target_disconnect_tc1 00:29:59.978 ************************************ 00:29:59.978 08:30:33 -- common/autotest_common.sh@1102 -- # nvmf_target_disconnect_tc1 00:29:59.978 08:30:33 -- host/target_disconnect.sh@32 -- # set +e 00:29:59.978 08:30:33 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.978 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.978 [2024-02-13 08:30:33.528324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.978 
[2024-02-13 08:30:33.528677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.978 [2024-02-13 08:30:33.528691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7be3e0 with addr=10.0.0.2, port=4420 00:29:59.978 [2024-02-13 08:30:33.528709] nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:59.978 [2024-02-13 08:30:33.528717] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:59.978 [2024-02-13 08:30:33.528723] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:59.978 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:59.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:59.978 Initializing NVMe Controllers 00:29:59.978 08:30:33 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:59.978 08:30:33 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:59.978 08:30:33 -- common/autotest_common.sh@1130 -- # [[ hxBET =~ e ]] 00:29:59.978 08:30:33 -- common/autotest_common.sh@1130 -- # return 0 00:29:59.978 08:30:33 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:59.978 08:30:33 -- host/target_disconnect.sh@41 -- # set -e 00:29:59.978 00:29:59.978 real 0m0.093s 00:29:59.978 user 0m0.038s 00:29:59.978 sys 0m0.055s 00:29:59.978 08:30:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:59.978 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:29:59.978 ************************************ 00:29:59.978 END TEST nvmf_target_disconnect_tc1 00:29:59.978 ************************************ 00:29:59.978 08:30:33 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:59.978 08:30:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:29:59.978 08:30:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:59.978 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:29:59.978 
************************************ 00:29:59.978 START TEST nvmf_target_disconnect_tc2 00:29:59.978 ************************************ 00:29:59.978 08:30:33 -- common/autotest_common.sh@1102 -- # nvmf_target_disconnect_tc2 00:29:59.978 08:30:33 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:59.978 08:30:33 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:59.978 08:30:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:59.978 08:30:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:59.978 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:29:59.978 08:30:33 -- nvmf/common.sh@469 -- # nvmfpid=2445144 00:29:59.978 08:30:33 -- nvmf/common.sh@470 -- # waitforlisten 2445144 00:29:59.978 08:30:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:59.978 08:30:33 -- common/autotest_common.sh@817 -- # '[' -z 2445144 ']' 00:29:59.978 08:30:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.978 08:30:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:59.978 08:30:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.978 08:30:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:59.978 08:30:33 -- common/autotest_common.sh@10 -- # set +x 00:29:59.978 [2024-02-13 08:30:33.624222] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:59.978 [2024-02-13 08:30:33.624263] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.978 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.238 [2024-02-13 08:30:33.697212] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.238 [2024-02-13 08:30:33.771705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:00.238 [2024-02-13 08:30:33.771810] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.238 [2024-02-13 08:30:33.771818] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.238 [2024-02-13 08:30:33.771824] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.238 [2024-02-13 08:30:33.772225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:00.238 [2024-02-13 08:30:33.772317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:00.238 [2024-02-13 08:30:33.772423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:00.238 [2024-02-13 08:30:33.772424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:00.807 08:30:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:00.807 08:30:34 -- common/autotest_common.sh@850 -- # return 0 00:30:00.807 08:30:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:00.807 08:30:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:00.807 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:30:00.807 08:30:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.807 08:30:34 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:30:00.807 08:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.807 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:30:00.807 Malloc0 00:30:00.807 08:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.807 08:30:34 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:00.807 08:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.807 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:30:00.807 [2024-02-13 08:30:34.490463] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.066 08:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.066 08:30:34 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.066 08:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.066 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:30:01.066 08:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.066 08:30:34 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.066 08:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.066 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:30:01.066 08:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.066 08:30:34 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.066 08:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.066 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:30:01.066 [2024-02-13 08:30:34.519473] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.066 08:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.066 08:30:34 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:01.066 
08:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:01.066 08:30:34 -- common/autotest_common.sh@10 -- # set +x 00:30:01.066 08:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:01.066 08:30:34 -- host/target_disconnect.sh@50 -- # reconnectpid=2445246 00:30:01.066 08:30:34 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:01.066 08:30:34 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.066 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.977 08:30:36 -- host/target_disconnect.sh@53 -- # kill -9 2445144 00:30:02.977 08:30:36 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:02.977 Read completed with error (sct=0, sc=8) 00:30:02.977 starting I/O failed 00:30:02.977 Read completed with error (sct=0, sc=8) 00:30:02.977 starting I/O failed 00:30:02.977 Read completed with error (sct=0, sc=8) 00:30:02.977 starting I/O failed 00:30:02.977 Read completed with error (sct=0, sc=8) 00:30:02.977 starting I/O failed 00:30:02.977 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 
00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 [2024-02-13 08:30:36.552576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 
Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write 
completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 [2024-02-13 08:30:36.552797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with 
error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 [2024-02-13 08:30:36.552995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, 
sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Write completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.978 starting I/O failed 00:30:02.978 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Write completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Write completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Write completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Write completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 
00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Write completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Read completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 Write completed with error (sct=0, sc=8) 00:30:02.979 starting I/O failed 00:30:02.979 [2024-02-13 08:30:36.553196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.979 [2024-02-13 08:30:36.553509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.553770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.553782] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.554014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.554241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.554251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.554482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.554784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.554815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-02-13 08:30:36.555083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.555332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.555341] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.555573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.555791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.555801] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.556006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.556277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.556287] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.556515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.556809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.556840] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-02-13 08:30:36.557112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.557389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.557419] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.557761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.558011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.558049] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.558370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.558723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.558753] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.559088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.559360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.559393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-02-13 08:30:36.559730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.559972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.560003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.560322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.560629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.560643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.560891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.561192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.561221] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.561477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.561724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.561754] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-02-13 08:30:36.562007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.562312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.562342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.562600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.562829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.562844] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.563131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.563401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.563416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 00:30:02.979 [2024-02-13 08:30:36.563689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.564069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.564088] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it. 
00:30:02.979 [2024-02-13 08:30:36.564437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.564690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.979 [2024-02-13 08:30:36.564720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.979 qpair failed and we were unable to recover it.
[… the same error sequence — paired posix.c:1037:posix_sock_create connect() failures (errno = 111) followed by nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeats continuously from 2024-02-13 08:30:36.564 through 08:30:36.616 …]
00:30:02.983 [2024-02-13 08:30:36.617141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.617448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.617477] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.617741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.618034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.618049] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.618266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.618498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.618528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.618774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.619011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.619041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-02-13 08:30:36.619349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.619703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.619734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.620062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.620376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.620405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.620676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.620926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.620956] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.621269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.621594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.621623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-02-13 08:30:36.622004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.622290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.622319] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.622583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.622914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.622943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.623263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.623585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.623614] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.623876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.624253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.624282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-02-13 08:30:36.624629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.624953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.624982] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.625238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.625576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.625591] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.625893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.626172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.626202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.983 [2024-02-13 08:30:36.626524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.626829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.626859] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 
00:30:02.983 [2024-02-13 08:30:36.627178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.627477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.983 [2024-02-13 08:30:36.627506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.983 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.627744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.628154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.628183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.628526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.628933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.628963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.629346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.629708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.629723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-02-13 08:30:36.629990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.630213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.630227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.630510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.630724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.630739] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.631059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.631367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.631396] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.631794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.632188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.632217] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-02-13 08:30:36.632583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.632796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.632825] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.633147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.633396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.633425] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.633729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.634029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.634058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.634455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.634710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.634739] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-02-13 08:30:36.635008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.635375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.635404] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.635740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.635976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.636005] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.636372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.636755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.636784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.637157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.637530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.637558] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-02-13 08:30:36.637877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.638218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.638248] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.638567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.638941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.638970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.639290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.639594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.639623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.639883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.640281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.640311] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-02-13 08:30:36.640702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.641094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.641123] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.641525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.641914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.641944] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.642269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.642521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.642549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.642950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.643266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.643295] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-02-13 08:30:36.643621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.643880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.643910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.644326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.644627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.644662] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.644970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.645170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.645203] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 00:30:02.984 [2024-02-13 08:30:36.645591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.645901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.984 [2024-02-13 08:30:36.645931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.984 qpair failed and we were unable to recover it. 
00:30:02.984 [2024-02-13 08:30:36.646322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.646712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.646742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.647118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.647482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.647511] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.647839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.647990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.648019] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.648407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.648743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.648757] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-02-13 08:30:36.649102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.649373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.649403] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.649788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.650124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.650138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.650497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.650766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.650796] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.651102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.651419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.651447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-02-13 08:30:36.651752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.651985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.652001] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.652292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.652661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.652690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.652946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.653280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.653309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 00:30:02.985 [2024-02-13 08:30:36.653673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.653878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.985 [2024-02-13 08:30:36.653908] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:02.985 qpair failed and we were unable to recover it. 
00:30:02.985 [2024-02-13 08:30:36.654168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.985 [2024-02-13 08:30:36.654482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.985 [2024-02-13 08:30:36.654511] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:02.985 qpair failed and we were unable to recover it.
00:30:02.985 ... (the identical failure pattern — two posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats continuously from [2024-02-13 08:30:36.654926] through [2024-02-13 08:30:36.712175]; elapsed-time prefixes advance from 00:30:02.985 to 00:30:03.256) ...
00:30:03.256 [2024-02-13 08:30:36.712578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.712895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.712925] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-02-13 08:30:36.713189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.713522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.713551] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-02-13 08:30:36.713855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.714203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.714233] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-02-13 08:30:36.714566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.714957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.714987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-02-13 08:30:36.715303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.715642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.715680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-02-13 08:30:36.716079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.716381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.716409] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-02-13 08:30:36.716746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.717060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.717090] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 00:30:03.256 [2024-02-13 08:30:36.717457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.717841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.717871] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.256 qpair failed and we were unable to recover it. 
00:30:03.256 [2024-02-13 08:30:36.718199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.256 [2024-02-13 08:30:36.718451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.718479] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.718853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.719162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.719192] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.719499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.719817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.719847] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.720158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.720449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.720463] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-02-13 08:30:36.720801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.721110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.721139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.721483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.721791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.721806] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.722147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.722457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.722486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.722805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.723061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.723090] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-02-13 08:30:36.723459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.723724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.723754] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.724103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.724468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.724497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.724816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.725217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.725246] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.725657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.725969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.725998] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-02-13 08:30:36.726364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.726777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.726807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.727125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.727371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.727400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.727707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.728075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.728104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.728433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.728844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.728874] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-02-13 08:30:36.729043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.729356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.729385] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.729800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.730113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.730142] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.730404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.730766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.730781] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.731070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.731408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.731437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-02-13 08:30:36.731804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.732087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.732116] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.732436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.732698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.732713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.732999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.733270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.733284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.733585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.733891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.733921] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.257 [2024-02-13 08:30:36.734221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.734357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.734386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.734657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.734992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.735007] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.735351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.735675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.735690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 00:30:03.257 [2024-02-13 08:30:36.736038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.736270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.257 [2024-02-13 08:30:36.736284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.257 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-02-13 08:30:36.736390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.736663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.736678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.737018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.737382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.737411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.737734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.738041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.738070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.738442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.738863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.738893] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-02-13 08:30:36.739302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.739554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.739583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.739980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.740320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.740349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.740664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.740955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.740970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.741257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.741645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.741683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-02-13 08:30:36.742001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.742286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.742300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.742640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.742890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.742920] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.743220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.743585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.743614] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.743932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.744296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.744325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-02-13 08:30:36.744718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.745021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.745050] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.745370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.745687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.745704] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.745998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.746336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.746350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.746740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.746941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.746970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-02-13 08:30:36.747241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.747615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.747644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.747983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.748315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.748344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.748667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.748993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.749022] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.749442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.749856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.749886] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 
00:30:03.258 [2024-02-13 08:30:36.750280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.750671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.750700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.258 qpair failed and we were unable to recover it. 00:30:03.258 [2024-02-13 08:30:36.751095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.258 [2024-02-13 08:30:36.751353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-02-13 08:30:36.751382] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-02-13 08:30:36.751726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-02-13 08:30:36.751974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-02-13 08:30:36.752003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 00:30:03.259 [2024-02-13 08:30:36.752402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-02-13 08:30:36.752672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.259 [2024-02-13 08:30:36.752690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.259 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-02-13 08:30:36.810013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.810382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.810411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.810683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.811000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.811040] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.811326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.811599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.811613] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.811934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.812165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.812180] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-02-13 08:30:36.812456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.812747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.812761] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.813085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.813374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.813389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.813762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.814090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.814105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.814446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.814818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.814848] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-02-13 08:30:36.815172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.815582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.815611] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.815866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.816203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.816218] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.816510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.816767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.816797] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.817172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.817487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.817515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-02-13 08:30:36.817890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.818267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.818296] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.818608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.818813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.818827] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.819167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.819445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.819459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.819848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.820136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.820165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-02-13 08:30:36.820487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.820744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.820759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.820989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.821279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.821308] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.821577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.821882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.821896] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.822125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.822482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.822497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 
00:30:03.262 [2024-02-13 08:30:36.822786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.823160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.823189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.823441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.823830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.823844] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.824073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.824424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.824438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.262 qpair failed and we were unable to recover it. 00:30:03.262 [2024-02-13 08:30:36.824755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.262 [2024-02-13 08:30:36.825050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.825064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-02-13 08:30:36.825302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.825599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.825628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.825963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.826354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.826383] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.826758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.827165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.827203] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.827473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.827794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.827823] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-02-13 08:30:36.828197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.828440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.828469] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.828845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.829173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.829202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.829598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.829969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.829999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.830274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.830591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.830620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-02-13 08:30:36.830933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.831301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.831316] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.831657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.831979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.832008] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.832339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.832640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.832691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.833027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.833333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.833362] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-02-13 08:30:36.833661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.833948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.833978] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.834332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.834718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.834748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.835007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.835254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.835283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.835598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.835928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.835943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-02-13 08:30:36.836231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.836505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.836519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.836801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.837202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.837232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.837482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.837900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.837930] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.838322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.838689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.838719] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 
00:30:03.263 [2024-02-13 08:30:36.839102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.839517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.839546] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.839865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.840092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.840107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.263 qpair failed and we were unable to recover it. 00:30:03.263 [2024-02-13 08:30:36.840353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.840639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.263 [2024-02-13 08:30:36.840663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-02-13 08:30:36.840945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.841165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.841179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 
00:30:03.264 [2024-02-13 08:30:36.841401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.841682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.841712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-02-13 08:30:36.842039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.842342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.842356] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-02-13 08:30:36.842577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.842848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.842862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-02-13 08:30:36.843242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.843622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.843657] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 
00:30:03.264 [2024-02-13 08:30:36.844040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.844434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.844463] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-02-13 08:30:36.844866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.845196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.845225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-02-13 08:30:36.845566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.845937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.845967] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 00:30:03.264 [2024-02-13 08:30:36.846241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.846658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.264 [2024-02-13 08:30:36.846687] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.264 qpair failed and we were unable to recover it. 
00:30:03.264 [2024-02-13 08:30:36.847005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.847297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.847326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.847586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.847896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.847911] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.848216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.848545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.848560] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.848794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.849083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.849098] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.849296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.849660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.849675] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.850051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.850366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.850395] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.850793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.851047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.851062] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.851351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.851539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.851553] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.851827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.852191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.852220] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.852625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.853007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.853037] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.853397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.853715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.853750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.854064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.854211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.854240] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.854612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.854921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.854950] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.855270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.855664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.855694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.856011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.856301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.856315] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.856532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.856802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.856817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.857172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.857445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.857474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.857869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.858263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.858292] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.264 qpair failed and we were unable to recover it.
00:30:03.264 [2024-02-13 08:30:36.858559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.264 [2024-02-13 08:30:36.858947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.858977] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.859374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.859749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.859764] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.860050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.860380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.860413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.860792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.861257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.861271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.861604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.861972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.861987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.862274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.862546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.862575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.862882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.863138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.863167] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.863491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.863816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.863846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.864151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.864468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.864497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.864659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.864972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.865001] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.865397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.865683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.865713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.866032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.866418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.866447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.866802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.867188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.867226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.867543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.867943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.867974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.868355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.868662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.868692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.868900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.869297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.869326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.869676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.870023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.870052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.870430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.870760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.870790] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.871052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.871345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.871374] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.871714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.871919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.871948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.872287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.872662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.872692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.873048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.873383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.873412] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.873737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.874155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.874189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.874494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.874909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.874939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.875299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.875619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.875669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.875857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.876114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.876128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.876470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.876756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.876771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.265 [2024-02-13 08:30:36.877062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.877288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.265 [2024-02-13 08:30:36.877318] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.265 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.877634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.878080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.878108] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.878423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.878815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.878846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.879237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.879630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.879667] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.879939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.880328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.880343] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.880564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.880797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.880812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.881165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.881392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.881407] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.881691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.881985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.882014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.882329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.882722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.882751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.883068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.883381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.883410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.883825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.884137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.884151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.884442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.884791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.884822] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.885159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.885547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.885577] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.885892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.886288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.886316] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.886621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.886969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.886998] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.887369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.887749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.887778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.888180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.888499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.888528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.888852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.889114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.889143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.889512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.889825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.889854] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.890185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.890541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.890570] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.890969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.891233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.891262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.891586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.891836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.891866] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.892121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.892456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.892485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.892912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.893324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.893338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.893655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.893871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.893886] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.894203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.894564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.894579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.894820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.895043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.895072] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.895336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.895701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.895731] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.266 qpair failed and we were unable to recover it.
00:30:03.266 [2024-02-13 08:30:36.896138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.896478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.266 [2024-02-13 08:30:36.896507] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.896907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.897219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.897234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.897555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.897841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.897878] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.898113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.898516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.898545] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.898913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.899242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.899271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.899518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.899833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.899863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.900259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.900561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.900590] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.900848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.901213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.901242] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.901498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.901743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.901774] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.902079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.902442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.902470] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.902821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.903124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.903153] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.903559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.903878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.903907] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.904295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.904672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.904702] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.905006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.905270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.905299] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.905639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.906041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.267 [2024-02-13 08:30:36.906071] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.267 qpair failed and we were unable to recover it.
00:30:03.267 [2024-02-13 08:30:36.906477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.906836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.906866] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.907125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.907464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.907493] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.907820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.908187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.908216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.908564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.908887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.908917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-02-13 08:30:36.909240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.909625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.909660] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.909983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.910351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.910381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.910683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.911051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.911081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.911414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.911757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.911771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-02-13 08:30:36.911947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.912294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.912323] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.912699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.913011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.913040] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.913359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.913582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.913596] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.913960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.914181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.914196] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 
00:30:03.267 [2024-02-13 08:30:36.914503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.914802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.914817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.267 qpair failed and we were unable to recover it. 00:30:03.267 [2024-02-13 08:30:36.915100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.267 [2024-02-13 08:30:36.915465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.915495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.915889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.916274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.916289] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.916582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.916757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.916787] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-02-13 08:30:36.917179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.917514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.917543] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.917908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.918235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.918264] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.918641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.919041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.919070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.919460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.919857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.919886] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-02-13 08:30:36.920200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.920534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.920549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.920843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.921146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.921175] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.921487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.921853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.921883] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.922152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.922528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.922557] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-02-13 08:30:36.922901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.923220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.923249] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.923590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.923909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.923939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.924333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.924722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.924752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.925125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.925433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.925462] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-02-13 08:30:36.925763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.926154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.926183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.926497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.926829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.926859] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.927272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.927494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.927509] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.927748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.928057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.928071] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-02-13 08:30:36.928340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.928613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.928628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.928869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.929208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.929222] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.929339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.929676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.929690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.268 [2024-02-13 08:30:36.929995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.930362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.930391] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 
00:30:03.268 [2024-02-13 08:30:36.930725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.930977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.268 [2024-02-13 08:30:36.931007] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.268 qpair failed and we were unable to recover it. 00:30:03.269 [2024-02-13 08:30:36.931242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-02-13 08:30:36.931474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-02-13 08:30:36.931503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.269 qpair failed and we were unable to recover it. 00:30:03.269 [2024-02-13 08:30:36.931814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.932237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.932251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.932548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.932886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.932901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 
00:30:03.536 [2024-02-13 08:30:36.933107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.933388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.933402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.933689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.933972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.933987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.934330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.934654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.934683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.935073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.935415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.935444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 
00:30:03.536 [2024-02-13 08:30:36.935841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.936001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.936016] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.936266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.936660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.936690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.936996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.937310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.937340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.937676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.937989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.938018] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 
00:30:03.536 [2024-02-13 08:30:36.938339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.938707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.938736] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.939083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.939479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.939508] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.939788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.940102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.940143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 00:30:03.536 [2024-02-13 08:30:36.940455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.940838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.940853] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.536 qpair failed and we were unable to recover it. 
00:30:03.536 [2024-02-13 08:30:36.941170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.941550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.536 [2024-02-13 08:30:36.941579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.537 qpair failed and we were unable to recover it. 00:30:03.537 [2024-02-13 08:30:36.941894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.537 [2024-02-13 08:30:36.942268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.537 [2024-02-13 08:30:36.942310] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.537 qpair failed and we were unable to recover it. 00:30:03.537 [2024-02-13 08:30:36.942700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.537 [2024-02-13 08:30:36.943017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.537 [2024-02-13 08:30:36.943046] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.537 qpair failed and we were unable to recover it. 00:30:03.537 [2024-02-13 08:30:36.943357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.537 [2024-02-13 08:30:36.943757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.537 [2024-02-13 08:30:36.943787] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.537 qpair failed and we were unable to recover it. 
00:30:03.537 [2024-02-13 08:30:36.943944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.944206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.944235] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.944603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.945002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.945032] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.945349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.945620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.945634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.945979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.946288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.946317] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.946620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.946953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.946982] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.947299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.947666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.947696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.948076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.948441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.948470] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.948847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.949000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.949028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.949326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.949610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.949624] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.949969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.950250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.950285] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.950626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.950995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.951025] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.951420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.951733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.951763] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.952157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.952412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.952441] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.952832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.953210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.953240] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.953604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.953921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.953951] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.954274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.954614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.954643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.954863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.955169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.955197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.955587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.955909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.955939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.956191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.956494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.956523] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.956913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.957229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.957243] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.957581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.957917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.957932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.958222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.958579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.958594] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.958936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.959275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.959303] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.959612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.959990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.960019] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.960352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.960692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.960722] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.961105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.961492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.961521] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.961937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.962262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.962292] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.962608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.962979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.963014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.963347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.963662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.963692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.964022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.964386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.964415] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.964731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.965099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.965128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.965455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.965657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.965672] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.965948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.966320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.966349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.966675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.967022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.967052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.967378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.967633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.967669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.968053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.968307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.968336] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.968598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.968897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.537 [2024-02-13 08:30:36.968912] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.537 qpair failed and we were unable to recover it.
00:30:03.537 [2024-02-13 08:30:36.969184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.969471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.969506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.969840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.970235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.970264] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.970600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.971022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.971052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.971398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.971758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.971787] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.972180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.972545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.972574] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.972944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.973255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.973284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.973605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.973860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.973890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.974205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.974401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.974416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.974753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.975036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.975050] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.975275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.975487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.975501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.975866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.976236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.976270] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.976518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.976805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.976835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.977202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.977500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.977529] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.977850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.978168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.978198] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.978563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.978940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.978970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.979346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.979723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.979752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.980140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.980453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.980482] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.980625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.980978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.981008] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.981261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.981549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.981579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.981850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.982153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.982167] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.982452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.982809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.982827] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.983066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.983347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.983362] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.983595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.983888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.983916] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.984173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.984537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.984566] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.984894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.985204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.985233] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.985438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.985748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.985778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.986106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.986415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.986444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.986799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.987194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.987223] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.987488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.987784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.987821] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.988164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.988550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.988579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.988999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.989214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.989243] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.989558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.989945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.989975] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.990226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.990505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.990519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.990836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.991231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.991261] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.991536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.991880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.991910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.992348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.992735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.992765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.993134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.993437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.993465] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.993783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.993930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.993960] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.994300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.994686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.994716] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.995093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.995403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.995432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.538 qpair failed and we were unable to recover it.
00:30:03.538 [2024-02-13 08:30:36.995755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.538 [2024-02-13 08:30:36.996144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.996173] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:36.996507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.996893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.996923] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:36.997211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.997640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.997677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:36.998013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.998390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.998408] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:36.998776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.999072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.999086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:36.999320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.999619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:36.999662] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:37.000064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.000457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.000486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:37.000801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.001163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.001177] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:37.001456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.001837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.001868] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:37.002134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.002475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.002504] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:37.002838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.003154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.539 [2024-02-13 08:30:37.003183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.539 qpair failed and we were unable to recover it.
00:30:03.539 [2024-02-13 08:30:37.003578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.003879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.003910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.004225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.004564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.004593] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.004975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.005290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.005320] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.005717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.006051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.006081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 
00:30:03.539 [2024-02-13 08:30:37.006470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.006786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.006817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.007139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.007456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.007485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.007788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.008090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.008120] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.008493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.008808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.008839] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 
00:30:03.539 [2024-02-13 08:30:37.009115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.009477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.009506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.009820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.010134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.010164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.010562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.010868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.010900] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.011231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.011573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.011603] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 
00:30:03.539 [2024-02-13 08:30:37.011962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.012282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.012312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.012477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.012783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.012814] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.013209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.013601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.013630] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.014015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.014285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.014299] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 
00:30:03.539 [2024-02-13 08:30:37.014530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.014839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.014870] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.015261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.015510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.015539] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.015779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.016171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.016201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.016525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.016786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.016820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 
00:30:03.539 [2024-02-13 08:30:37.017094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.017459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.017488] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.017861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.018254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.018284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.018611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.019024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.019055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 00:30:03.539 [2024-02-13 08:30:37.019269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.019633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.019678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.539 qpair failed and we were unable to recover it. 
00:30:03.539 [2024-02-13 08:30:37.020057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.539 [2024-02-13 08:30:37.020423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.020451] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.020796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.021165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.021195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.021515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.021822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.021852] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.022173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.022565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.022594] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.022861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.023254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.023284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.023551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.023924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.023955] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.024379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.024633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.024692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.025046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.025413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.025442] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.025808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.026170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.026185] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.026423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.026749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.026778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.027058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.027380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.027409] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.027802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.028116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.028145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.028520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.028827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.028858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.029119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.029432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.029461] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.029853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.030166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.030195] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.030443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.030825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.030839] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.031073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.031357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.031386] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.031734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.032104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.032133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.032457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.032770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.032800] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.033191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.033454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.033483] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.033857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.034227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.034256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.034594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.034919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.034949] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.035369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.035720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.035750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.036127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.036490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.036519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.036852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.037218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.037248] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.037618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.038009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.038039] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.038418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.038742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.038773] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.039038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.039177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.039191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.039469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.039830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.039844] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.040159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.040362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.040392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.040713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.041082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.041111] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.041491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.041802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.041832] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.540 [2024-02-13 08:30:37.042210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.042525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.042554] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.042803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.043129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.043159] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.043549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.043860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.043890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 00:30:03.540 [2024-02-13 08:30:37.044154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.044536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.540 [2024-02-13 08:30:37.044565] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.540 qpair failed and we were unable to recover it. 
00:30:03.542 [2024-02-13 08:30:37.100778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.542 [2024-02-13 08:30:37.101074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.542 [2024-02-13 08:30:37.101103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.542 qpair failed and we were unable to recover it. 00:30:03.542 [2024-02-13 08:30:37.101372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.542 [2024-02-13 08:30:37.101672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.542 [2024-02-13 08:30:37.101703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.542 qpair failed and we were unable to recover it. 00:30:03.542 [2024-02-13 08:30:37.101974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.542 [2024-02-13 08:30:37.102246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.102276] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.102537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.102867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.102897] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.103306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.103560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.103575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.103859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.104083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.104113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.104441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.104692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.104722] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.105056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.105449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.105479] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.105826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.106154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.106183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.106580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.106892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.106922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.107244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.107566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.107608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.107869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.108246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.108275] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.108680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.109050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.109079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.109421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.109827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.109858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.110212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.110562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.110591] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.110998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.111297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.111326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.111630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.111957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.111987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.112257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.112555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.112569] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.112851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.113126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.113140] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.113431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.113717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.113747] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.114152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.114545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.114559] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.114934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.115234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.115262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.115575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.115879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.115893] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.116164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.116455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.116470] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.116683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.117059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.117074] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.117352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.117698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.117729] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.118048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.118376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.118405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.118737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.119035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.119064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.119276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.119575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.119604] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.120008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.120269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.120298] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.120619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.120887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.120917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.121229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.121639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.121678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.122004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.122396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.122425] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.122749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.123008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.123023] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.123387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.123701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.123731] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.124035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.124384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.124399] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 
00:30:03.543 [2024-02-13 08:30:37.124766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.125077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.125107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.125449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.125829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.125843] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.543 qpair failed and we were unable to recover it. 00:30:03.543 [2024-02-13 08:30:37.126141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.543 [2024-02-13 08:30:37.126531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.126560] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.126928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.127162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.127205] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.127595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.127912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.127942] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.128190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.128501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.128531] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.128958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.129281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.129310] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.129637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.129968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.129997] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.130250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.130633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.130673] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.131054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.131465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.131493] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.131886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.132251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.132280] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.132629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.132964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.132995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.133169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.133428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.133457] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.133773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.134073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.134103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.134404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.134776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.134807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.135047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.135334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.135348] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.135686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.136051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.136080] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.136403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.136741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.136771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.137036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.137300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.137330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.137603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.137924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.137961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.138236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.138573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.138602] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.138937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.139180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.139209] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.139614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.139989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.140019] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.140277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.140591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.140620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.140944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.141234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.141250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.141555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.141798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.141828] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.142231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.142506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.142535] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.142926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.143204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.143234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.143492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.143808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.143838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.144285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.144618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.144654] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.144904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.145254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.145283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.145598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.145862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.145893] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.146215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.146531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.146560] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.146815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.147233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.147262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.147579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.147821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.147835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.148110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.148403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.148433] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.148716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.148977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.148992] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.149353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.149555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.149592] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.149915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.150167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.150196] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.150455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.150831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.150846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 
00:30:03.544 [2024-02-13 08:30:37.151208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.151499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.151528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.151781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.152090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.152120] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.152372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.152707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.152737] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.544 qpair failed and we were unable to recover it. 00:30:03.544 [2024-02-13 08:30:37.153064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.544 [2024-02-13 08:30:37.153314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.153343] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.153656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.153764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.153778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.154113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.154453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.154482] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.154743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.155045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.155059] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.155443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.155756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.155771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.156073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.156383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.156412] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.156565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.156840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.156871] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.157180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.157497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.157526] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.157793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.158027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.158042] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.158399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.158770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.158800] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.159161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.159410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.159439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.159760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.159962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.159976] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.160350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.160670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.160700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.161051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.161295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.161324] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.161629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.161883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.161912] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.162303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.162601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.162630] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.162898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.163259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.163288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.163535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.163899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.163913] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.164252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.164488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.164502] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.164780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.165145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.165174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.165572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.165933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.165963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.166229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.166562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.166591] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.166870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.167113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.167143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.167382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.167616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.167631] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.167926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.168217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.168231] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.168506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.168719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.168734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.169126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.169382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.169411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.169812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.170125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.170154] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.170403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.170638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.170690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.171000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.171334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.171349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.171624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.172007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.172036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.172356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.172702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.172733] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.173130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.173497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.173526] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.173847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.174116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.174131] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.174357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.174561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.174575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.174947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.175163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.175178] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.175401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.175744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.175774] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.176089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.176445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.176474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.176897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.177266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.177300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.177645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.177971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.178000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.178348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.178579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.178608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 
00:30:03.545 [2024-02-13 08:30:37.178948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.179345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.179375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.545 qpair failed and we were unable to recover it. 00:30:03.545 [2024-02-13 08:30:37.179620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.545 [2024-02-13 08:30:37.179989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.180019] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.180335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.180724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.180755] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.181085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.181396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.181425] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 
00:30:03.546 [2024-02-13 08:30:37.181843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.182000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.182029] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.182377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.182682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.182712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.183030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.183402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.183416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.183628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.183786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.183821] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 
00:30:03.546 [2024-02-13 08:30:37.184214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.184539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.184568] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.184988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.185232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.185261] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.185466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.185700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.185730] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.186097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.186492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.186507] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 
00:30:03.546 [2024-02-13 08:30:37.186849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.187161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.187190] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.187511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.187900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.187914] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.188225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.188450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.188464] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 00:30:03.546 [2024-02-13 08:30:37.188754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.189078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.546 [2024-02-13 08:30:37.189107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.546 qpair failed and we were unable to recover it. 
00:30:03.546 [2024-02-13 08:30:37.189249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.189549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.189579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.189886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.190170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.190187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.190413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.190699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.190713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.190972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.191191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.191206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.191479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.191758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.191772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.192077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.192370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.192385] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.192749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.193060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.193089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.193354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.193574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.193604] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.193942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.194330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.194359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.194682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.195050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.195079] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.195464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.195830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.195859] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.196069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.196339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.196368] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.196685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.197083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.197112] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.197432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.197803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.197833] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.198085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.198417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.198446] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.198759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.199086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.199100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.199466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.199772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.199802] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.200130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.200455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.200484] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.200848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.201085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.201100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.201295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.201520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.201534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.201827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.202040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.202054] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.202342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.202677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.202692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.546 qpair failed and we were unable to recover it.
00:30:03.546 [2024-02-13 08:30:37.202942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.546 [2024-02-13 08:30:37.203325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.203354] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.203698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.203949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.203978] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.204309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.204483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.204524] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.204825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.205141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.205155] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.205398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.205741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.205755] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.206016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.206249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.206278] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.206543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.206775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.206790] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.207071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.207358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.207388] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.207653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.207970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.208000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.208367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.208738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.208753] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.209063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.209297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.209326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.209745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.210128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.210157] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.210527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.210834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.210864] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.211231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.211551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.211580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.211972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.212332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.212346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.212622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.212862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.212877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.213182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.213446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.213475] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.213852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.214239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.547 [2024-02-13 08:30:37.214269] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.547 qpair failed and we were unable to recover it.
00:30:03.547 [2024-02-13 08:30:37.214662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.214980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.214995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.215359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.215698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.215713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.216107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.216403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.216417] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.216737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.217086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.217115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.217430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.217741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.217756] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.218046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.218388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.218417] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.218808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.219071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.219100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.219373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.219670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.219700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.219994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.220338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.220352] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.220675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.220973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.221002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.221325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.221668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.221698] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.815 qpair failed and we were unable to recover it.
00:30:03.815 [2024-02-13 08:30:37.222119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.815 [2024-02-13 08:30:37.222483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.222512] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.222820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.223130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.223159] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.223550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.223800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.223815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.224126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.224432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.224447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.224728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.225051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.225080] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.225452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.225819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.225862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.226143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.226424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.226438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.226709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.227060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.227089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.227481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.227847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.227877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.228272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.228701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.228731] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.229103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.229508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.229538] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.229856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.230198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.230227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.230620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.230942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.230972] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.231354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.231722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.231751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.232146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.232528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.232557] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.232875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.233163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.233194] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.233563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.233865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.233895] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.234233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.234491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.234520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.234832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.235173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.235202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.235453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.235765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.235795] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.236104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.236395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.816 [2024-02-13 08:30:37.236409] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.816 qpair failed and we were unable to recover it.
00:30:03.816 [2024-02-13 08:30:37.236711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.816 [2024-02-13 08:30:37.236936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.816 [2024-02-13 08:30:37.236950] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.816 qpair failed and we were unable to recover it. 00:30:03.816 [2024-02-13 08:30:37.237294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.816 [2024-02-13 08:30:37.237632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.237670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.237981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.238206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.238220] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.238558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.238790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.238805] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 
00:30:03.817 [2024-02-13 08:30:37.239169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.239570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.239599] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.240027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.240342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.240371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.240787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.241028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.241057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.241424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.241799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.241829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 
00:30:03.817 [2024-02-13 08:30:37.242234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.242572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.242601] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.242786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.243174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.243203] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.243542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.243932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.243947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.244341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.244628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.244665] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 
00:30:03.817 [2024-02-13 08:30:37.244986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.245366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.245380] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.245727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.246042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.246071] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.246408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.246776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.246805] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.247123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.247431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.247460] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 
00:30:03.817 [2024-02-13 08:30:37.247760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.248098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.248127] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.248388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.248642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.248687] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.249060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.249392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.249421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.249745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.250044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.250074] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 
00:30:03.817 [2024-02-13 08:30:37.250323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.250635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.250674] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.251072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.251407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.251437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.251755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.251984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.252013] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.252337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.252593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.252623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 
00:30:03.817 [2024-02-13 08:30:37.252890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.253008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.253023] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.253386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.253533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.253562] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.253934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.254253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.817 [2024-02-13 08:30:37.254268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.817 qpair failed and we were unable to recover it. 00:30:03.817 [2024-02-13 08:30:37.254634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.255022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.255051] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 
00:30:03.818 [2024-02-13 08:30:37.255368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.255674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.255704] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.256099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.256463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.256492] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.256890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.257274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.257288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.257643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.257996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.258026] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 
00:30:03.818 [2024-02-13 08:30:37.258229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.258520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.258549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.258920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.259311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.259340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.259544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.259859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.259873] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.260100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.260465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.260494] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 
00:30:03.818 [2024-02-13 08:30:37.260761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.261100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.261129] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.261496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.261747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.261777] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.262149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.262466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.262495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.262818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.263168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.263197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 
00:30:03.818 [2024-02-13 08:30:37.263510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.263917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.263947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.264250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.264558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.264587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.264927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.265320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.265349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.265683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.266055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.266085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 
00:30:03.818 [2024-02-13 08:30:37.266454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.266707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.266737] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.267151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.267514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.267543] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.267770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.268083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.268112] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.268504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.268894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.268923] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 
00:30:03.818 [2024-02-13 08:30:37.269251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.269664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.269693] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.270085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.270397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.270426] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.270577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.270898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.270928] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.271261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.271576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.271605] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 
00:30:03.818 [2024-02-13 08:30:37.271991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.272228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.818 [2024-02-13 08:30:37.272257] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.818 qpair failed and we were unable to recover it. 00:30:03.818 [2024-02-13 08:30:37.272683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.272999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.273028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.273444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.273857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.273886] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.274290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.274696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.274726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 
00:30:03.819 [2024-02-13 08:30:37.275032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.275326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.275341] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.275690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.276031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.276060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.276402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.276713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.276743] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.277115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.277454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.277482] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 
00:30:03.819 [2024-02-13 08:30:37.277871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.278174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.278208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.278601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.278933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.278963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.279385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.279635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.279672] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.280080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.280479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.280509] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 
00:30:03.819 [2024-02-13 08:30:37.280887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.281279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.281309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.281560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.281877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.281907] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.282260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.282547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.282561] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.282904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.283153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.283182] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 
00:30:03.819 [2024-02-13 08:30:37.283495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.283863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.283893] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.284274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.284588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.284618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.284957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.285202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.285236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.285557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.285943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.285973] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 
00:30:03.819 [2024-02-13 08:30:37.286335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.286653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.286684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.287014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.287370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.287400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.287722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.288040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.288069] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.288378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.288751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.288781] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 
00:30:03.819 [2024-02-13 08:30:37.289039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.289314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.289328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.819 qpair failed and we were unable to recover it. 00:30:03.819 [2024-02-13 08:30:37.289618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.819 [2024-02-13 08:30:37.289960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.289975] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.290284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.290599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.290628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.291006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.291340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.291369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 
00:30:03.820 [2024-02-13 08:30:37.291798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.292165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.292199] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.292534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.292910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.292940] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.293264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.293614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.293644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.293997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.294363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.294392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 
00:30:03.820 [2024-02-13 08:30:37.294706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.295077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.295107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.295409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.295769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.295784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.296015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.296396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.296425] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.296796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.297112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.297141] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 
00:30:03.820 [2024-02-13 08:30:37.297481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.297692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.297721] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.298119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.298476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.298505] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.298834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.299158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.299192] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.299415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.299808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.299838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 
00:30:03.820 [2024-02-13 08:30:37.300155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.300558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.300586] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.300981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.301299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.301314] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.301672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.301864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.301878] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.820 qpair failed and we were unable to recover it. 00:30:03.820 [2024-02-13 08:30:37.302258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.302500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.820 [2024-02-13 08:30:37.302529] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 
00:30:03.821 [2024-02-13 08:30:37.302915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.303283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.303312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.303658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.303969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.303997] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.304268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.304681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.304711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.304967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.305171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.305186] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 
00:30:03.821 [2024-02-13 08:30:37.305507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.305843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.305858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.306082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.306465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.306479] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.306818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.307086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.307115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.307440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.307756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.307786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 
00:30:03.821 [2024-02-13 08:30:37.308118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.308404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.308432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.308824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.309229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.309258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.309627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.309932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.309946] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.310311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.310676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.310706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 
00:30:03.821 [2024-02-13 08:30:37.311090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.311331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.311360] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.311685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.312049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.312063] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.312429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.312755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.312785] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.313105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.313428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.313457] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 
00:30:03.821 [2024-02-13 08:30:37.313776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.314025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.314064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.314402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.314689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.314718] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.315086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.315365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.315380] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.315617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.315908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.315924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 
00:30:03.821 [2024-02-13 08:30:37.316223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.316472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.316501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.316848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.317130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.317145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.317428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.317736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.317751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.318048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.318387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.318402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 
00:30:03.821 [2024-02-13 08:30:37.318697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.319011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.319041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.319250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.319499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.319528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.821 qpair failed and we were unable to recover it. 00:30:03.821 [2024-02-13 08:30:37.319906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.821 [2024-02-13 08:30:37.320240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.320269] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.320664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.320974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.321003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 
00:30:03.822 [2024-02-13 08:30:37.321325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.321714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.321744] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.322080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.322399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.322436] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.322757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.323154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.323183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.323503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.323849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.323879] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 
00:30:03.822 [2024-02-13 08:30:37.324198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.324513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.324541] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.324795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.325052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.325081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.325392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.325728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.325743] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.326045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.326314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.326328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 
00:30:03.822 [2024-02-13 08:30:37.326690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.326983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.327012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.327435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.327753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.327782] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.328030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.328402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.328416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.328631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.329002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.329017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 
00:30:03.822 [2024-02-13 08:30:37.329227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.329568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.329597] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.330027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.330441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.330455] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.330687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.331022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.331036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.331383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.331599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.331628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 
00:30:03.822 [2024-02-13 08:30:37.332030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.332331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.332345] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.332666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.333034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.333063] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.333377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.333662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.333677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 00:30:03.822 [2024-02-13 08:30:37.333913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.334131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.822 [2024-02-13 08:30:37.334146] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.822 qpair failed and we were unable to recover it. 
00:30:03.822 [2024-02-13 08:30:37.334488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.334887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.334917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.822 qpair failed and we were unable to recover it.
00:30:03.822 [2024-02-13 08:30:37.335226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.335512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.335526] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.822 qpair failed and we were unable to recover it.
00:30:03.822 [2024-02-13 08:30:37.335864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.336096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.336110] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.822 qpair failed and we were unable to recover it.
00:30:03.822 [2024-02-13 08:30:37.336403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.336717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.822 [2024-02-13 08:30:37.336748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.822 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.337056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.337366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.337381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.337730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.338038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.338067] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.338376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.338612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.338640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.338971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.339206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.339235] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.339636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.339881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.339919] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.340264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.340588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.340617] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.340962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.341346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.341375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.341751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.341998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.342027] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.342425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.342821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.342851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.343251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.343576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.343618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.344015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.344329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.344343] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.344781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.345076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.345105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.345495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.345754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.345784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.346164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.346378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.346407] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.346724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.347121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.347135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.347355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.347639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.347658] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.347877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.348176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.348190] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.348476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.348702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.348716] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.349069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.349287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.349328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.349587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.349900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.349930] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.350245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.350567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.350596] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.350816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.351185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.351199] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.351509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.351829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.351858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.352349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.352736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.352765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.353136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.353386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.823 [2024-02-13 08:30:37.353400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.823 qpair failed and we were unable to recover it.
00:30:03.823 [2024-02-13 08:30:37.353680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.353960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.353975] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.354148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.354445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.354474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.354779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.355088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.355117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.355424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.355786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.355816] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.356132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.356362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.356376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.356709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.357020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.357049] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.357440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.357805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.357835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.358153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.358458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.358472] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.358711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.359079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.359108] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.359421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.359790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.359820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.360215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.360516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.360545] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.361017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.361406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.361436] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.361738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.362053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.362095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.362470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.362790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.362820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.363197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.363536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.363565] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.363815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.364127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.364156] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.364383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.364696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.364726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.365092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.365348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.365377] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.365684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.366076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.366105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.366428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.366726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.366741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.367018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.367358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.367397] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.367766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.368108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.368137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.368539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.368846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.368875] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.824 qpair failed and we were unable to recover it.
00:30:03.824 [2024-02-13 08:30:37.369258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.824 [2024-02-13 08:30:37.369525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.369555] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.369873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.370265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.370294] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.370668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.370975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.371004] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.371392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.371637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.371675] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.371937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.372243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.372272] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.372531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.372880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.372919] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.373208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.373438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.373453] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.373675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.373960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.373974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.374247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.374530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.374544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.374818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.375114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.375128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.375402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.375779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.375809] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.376179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.376544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.376573] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.376841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.377238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.377267] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.377578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.377796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.377812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.378149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.378536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.378565] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.378893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.379286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.379321] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.379721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.380146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.380175] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.380589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.380842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.380871] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.381176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.381464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.381478] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.381821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.382188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.825 [2024-02-13 08:30:37.382218] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.825 qpair failed and we were unable to recover it.
00:30:03.825 [2024-02-13 08:30:37.382466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.382748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.382773] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.825 qpair failed and we were unable to recover it. 00:30:03.825 [2024-02-13 08:30:37.383142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.383490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.383519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.825 qpair failed and we were unable to recover it. 00:30:03.825 [2024-02-13 08:30:37.383784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.384099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.384128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.825 qpair failed and we were unable to recover it. 00:30:03.825 [2024-02-13 08:30:37.384433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.384822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.384851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.825 qpair failed and we were unable to recover it. 
00:30:03.825 [2024-02-13 08:30:37.385190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.385480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.385494] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.825 qpair failed and we were unable to recover it. 00:30:03.825 [2024-02-13 08:30:37.385713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.386000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.386017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.825 qpair failed and we were unable to recover it. 00:30:03.825 [2024-02-13 08:30:37.386370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.825 [2024-02-13 08:30:37.386689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.386719] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.386864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.387243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.387272] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 
00:30:03.826 [2024-02-13 08:30:37.387666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.387925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.387954] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.388269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.388631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.388666] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.388916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.389227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.389256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.389572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.389914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.389943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 
00:30:03.826 [2024-02-13 08:30:37.390214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.390577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.390607] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.391013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.391363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.391393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.391739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.392054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.392082] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.392399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.392640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.392683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 
00:30:03.826 [2024-02-13 08:30:37.393056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.393432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.393461] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.393766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.394083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.394113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.394425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.394734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.394763] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.395155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.395519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.395548] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 
00:30:03.826 [2024-02-13 08:30:37.395886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.396183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.396213] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.396461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.396843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.396872] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.397199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.397494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.397524] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.397713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.398023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.398053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 
00:30:03.826 [2024-02-13 08:30:37.398443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.398829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.398858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.399182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.399493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.826 [2024-02-13 08:30:37.399527] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.826 qpair failed and we were unable to recover it. 00:30:03.826 [2024-02-13 08:30:37.399953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.400272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.400301] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.400527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.400888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.400917] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 
00:30:03.827 [2024-02-13 08:30:37.401218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.401547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.401576] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.401896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.402193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.402208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.402547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.402917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.402947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.403337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.403604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.403633] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 
00:30:03.827 [2024-02-13 08:30:37.404043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.404353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.404381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.404687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.405050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.405092] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.405515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.405819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.405848] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.406170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.406469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.406497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 
00:30:03.827 [2024-02-13 08:30:37.406899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.407286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.407316] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.407694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.408086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.408115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.408452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.408772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.408802] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.409121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.409508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.409536] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 
00:30:03.827 [2024-02-13 08:30:37.409962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.410280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.410309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.410697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.410962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.410991] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.411359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.411745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.411760] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.411939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.412329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.412358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 
00:30:03.827 [2024-02-13 08:30:37.412681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.413049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.413078] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.413428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.413817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.413846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.414170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.414420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.414463] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.414755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.415096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.415125] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 
00:30:03.827 [2024-02-13 08:30:37.415456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.415817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.415848] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.416224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.416538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.416567] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.416965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.417361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.417391] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 00:30:03.827 [2024-02-13 08:30:37.417712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.418020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.418049] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.827 qpair failed and we were unable to recover it. 
00:30:03.827 [2024-02-13 08:30:37.418448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.827 [2024-02-13 08:30:37.418787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.418816] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.419185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.419448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.419477] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.419869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.420112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.420141] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.420472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.420836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.420865] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 
00:30:03.828 [2024-02-13 08:30:37.421197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.421521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.421550] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.421796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.422188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.422217] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.422530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.422890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.422919] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.423315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.423705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.423734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 
00:30:03.828 [2024-02-13 08:30:37.424046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.424447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.424476] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.424814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.425136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.425165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.425416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.425827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.425858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 00:30:03.828 [2024-02-13 08:30:37.426230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.426597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.828 [2024-02-13 08:30:37.426626] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.828 qpair failed and we were unable to recover it. 
00:30:03.828 [2024-02-13 08:30:37.427029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.427343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.427372] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.427679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.427978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.428007] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.428329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.428712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.428743] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.429051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.429240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.429269] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.429634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.429951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.429965] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.430279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.430667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.430698] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.431086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.431400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.431429] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.431756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.432155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.432185] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.432525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.432888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.432918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.433309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.433671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.433701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.434012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.434409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.434438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.434694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.434983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.435012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.435393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.435659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.435688] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.828 [2024-02-13 08:30:37.436078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.436512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.828 [2024-02-13 08:30:37.436526] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.828 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.436872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.437216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.437230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.437504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.437815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.437846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.438268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.438585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.438615] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.439024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.439390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.439419] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.439714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.440047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.440076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.440465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.440849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.440879] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.441277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.441614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.441643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.441941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.442278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.442307] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.442641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.442915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.442944] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.443248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.443667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.443697] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.444066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.444431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.444460] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.444831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.445163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.445193] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.445504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.445802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.445841] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.446169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.446542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.446571] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.446876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.447177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.447207] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.447558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.447956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.447987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.448316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.448632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.448669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.449074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.449438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.449467] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.449866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.450218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.450247] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.450571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.450918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.450948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.451276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.451599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.451628] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.452058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.452308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.452338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.452719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.453207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.453236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.453577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.453965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.453996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.829 qpair failed and we were unable to recover it.
00:30:03.829 [2024-02-13 08:30:37.454340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.454733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.829 [2024-02-13 08:30:37.454763] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.455102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.455423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.455452] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.455822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.456214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.456243] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.456622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.456994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.457024] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.457425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.457686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.457719] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.458111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.458448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.458478] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.458784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.459169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.459199] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.459538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.459853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.459883] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.460275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.460642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.460678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.460969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.461276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.461305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.461719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.462037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.462067] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.462462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.462857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.462888] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.463281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.463664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.463678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.464048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.464428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.464457] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.464769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.465139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.465169] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.465567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.465939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.465971] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.466362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.466653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.466668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.467041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.467439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.467480] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.467833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.468202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.468232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.468536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.468902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.468931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.469161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.469485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.469515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.469907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.470252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.470282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.470672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.471002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.471050] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.830 qpair failed and we were unable to recover it.
00:30:03.830 [2024-02-13 08:30:37.471457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.830 [2024-02-13 08:30:37.471776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.471791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.472161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.472502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.472532] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.472836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.473223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.473252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.473667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.474060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.474089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.474479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.474785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.474815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.475155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.475557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.475587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.475959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.476349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.476379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.476770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.477132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.477161] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.477487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.477882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.477912] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.478306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.478695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.478725] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.479095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.479491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.479520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.479857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.480170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.480200] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.480619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.481015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.481063] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.481435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.481828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.481859] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.482203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.482527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.482556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.482975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.483293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.483322] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.483698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.484089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.484118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.484515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.484902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.484933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.485330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.485658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.485688] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.486085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.486320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.486349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.486752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.487172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.487202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.487587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.487985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.488014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.831 qpair failed and we were unable to recover it.
00:30:03.831 [2024-02-13 08:30:37.488399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.831 [2024-02-13 08:30:37.488758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.488773] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.832 qpair failed and we were unable to recover it.
00:30:03.832 [2024-02-13 08:30:37.489011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.489351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.489366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.832 qpair failed and we were unable to recover it.
00:30:03.832 [2024-02-13 08:30:37.489767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.490160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.490189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.832 qpair failed and we were unable to recover it.
00:30:03.832 [2024-02-13 08:30:37.490583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.490968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.490999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.832 qpair failed and we were unable to recover it.
00:30:03.832 [2024-02-13 08:30:37.491397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.491716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.832 [2024-02-13 08:30:37.491731] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:03.832 qpair failed and we were unable to recover it.
00:30:03.832 [2024-02-13 08:30:37.492093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.832 [2024-02-13 08:30:37.492371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.832 [2024-02-13 08:30:37.492401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.832 qpair failed and we were unable to recover it. 00:30:03.832 [2024-02-13 08:30:37.492794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.832 [2024-02-13 08:30:37.493122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.832 [2024-02-13 08:30:37.493151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.832 qpair failed and we were unable to recover it. 00:30:03.832 [2024-02-13 08:30:37.493562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.832 [2024-02-13 08:30:37.493929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.832 [2024-02-13 08:30:37.493944] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:03.832 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.494234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.494542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.494557] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 
00:30:04.100 [2024-02-13 08:30:37.494858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.495244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.495261] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.495619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.495993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.496008] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.496401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.496696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.496726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.497119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.497503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.497518] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 
00:30:04.100 [2024-02-13 08:30:37.497879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.498229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.498258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.498554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.498919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.498950] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.499341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.499668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.499698] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.500069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.500455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.500484] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 
00:30:04.100 [2024-02-13 08:30:37.500857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.501226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.501256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.501665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.502052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.502082] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.502434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.502819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.502854] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.503184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.503578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.503607] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 
00:30:04.100 [2024-02-13 08:30:37.503954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.504358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.504387] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.504707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.505099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.505129] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.505525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.505918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.505933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.506298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.506599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.506629] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 
00:30:04.100 [2024-02-13 08:30:37.507012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.507406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.507436] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.507843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.508172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.508202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.100 qpair failed and we were unable to recover it. 00:30:04.100 [2024-02-13 08:30:37.508603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.508961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.100 [2024-02-13 08:30:37.508991] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.509296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.509665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.509694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 
00:30:04.101 [2024-02-13 08:30:37.510090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.510461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.510496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.510767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.511128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.511143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.511539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.511879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.511910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.512235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.512632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.512670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 
00:30:04.101 [2024-02-13 08:30:37.513065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.513462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.513491] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.513888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.514283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.514312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.514706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.515097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.515127] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.515471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.515860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.515876] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 
00:30:04.101 [2024-02-13 08:30:37.516187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.516460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.516490] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.516865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.517262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.517291] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.517637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.518051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.518087] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.518400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.518727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.518743] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 
00:30:04.101 [2024-02-13 08:30:37.519039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.519405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.519435] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.519831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.520109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.520138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.520458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.520786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.520801] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.521168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.521539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.521554] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 
00:30:04.101 [2024-02-13 08:30:37.521851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.522197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.522226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.522624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.522981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.523012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.523334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.523670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.523701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.524136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.524452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.524481] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 
00:30:04.101 [2024-02-13 08:30:37.524854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.525154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.525183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.525613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.525963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.525994] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.526404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.526795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.526811] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 00:30:04.101 [2024-02-13 08:30:37.527177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.527567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.527597] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.101 qpair failed and we were unable to recover it. 
00:30:04.101 [2024-02-13 08:30:37.528041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.101 [2024-02-13 08:30:37.528451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.528480] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.528867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.529223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.529253] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.529576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.529871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.529888] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.530267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.530514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.530544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 
00:30:04.102 [2024-02-13 08:30:37.530954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.531288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.531318] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.531700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.531950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.531980] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.532397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.532771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.532812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.533131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.533411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.533440] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 
00:30:04.102 [2024-02-13 08:30:37.533859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.534255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.534285] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.534686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.535038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.535053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.535399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.535857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.535888] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 00:30:04.102 [2024-02-13 08:30:37.536288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.536589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.102 [2024-02-13 08:30:37.536619] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.102 qpair failed and we were unable to recover it. 
00:30:04.102 [... the same four-entry cycle repeats verbatim from 08:30:37.537052 through 08:30:37.594632: two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7fb0a8000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." ...]
00:30:04.106 [2024-02-13 08:30:37.594950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.595262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.595292] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.595680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.596056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.596086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.596484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.596959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.596974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.597361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.597754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.597783] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 
00:30:04.106 [2024-02-13 08:30:37.598095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.598430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.598445] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.598745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.598981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.598995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.599240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.599614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.599644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.600052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.600425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.600454] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 
00:30:04.106 [2024-02-13 08:30:37.600829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.601159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.601189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.601518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.601932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.601963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.602275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.602693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.602723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.603113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.603513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.603543] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 
00:30:04.106 [2024-02-13 08:30:37.603894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.604263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.604277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.604562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.604882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.604898] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.605244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.605543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.605558] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.606822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.607229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.607249] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 
00:30:04.106 [2024-02-13 08:30:37.607627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.607961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.607976] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.608332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.608753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.608787] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.609051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.609294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.609342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.609732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.610027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.610057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 
00:30:04.106 [2024-02-13 08:30:37.610423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.610775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.610807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.106 [2024-02-13 08:30:37.611209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.611611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.106 [2024-02-13 08:30:37.611657] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.106 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.611930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.612249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.612264] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.612632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.613097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.613137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 
00:30:04.107 [2024-02-13 08:30:37.613597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.613887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.613918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.614245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.614666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.614696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.615083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.615348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.615378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.615771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.616147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.616176] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 
00:30:04.107 [2024-02-13 08:30:37.616514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.616784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.616814] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.617217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.617590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.617620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.617987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.618304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.618334] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.618748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.619096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.619137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 
00:30:04.107 [2024-02-13 08:30:37.619540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.619942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.619973] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.620350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.620745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.620776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.621119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.621517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.621546] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.621934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.622285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.622315] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 
00:30:04.107 [2024-02-13 08:30:37.622655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.622982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.623012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.623417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.623794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.623826] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.624153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.624552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.624582] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.624985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.625359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.625389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 
00:30:04.107 [2024-02-13 08:30:37.625837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.626185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.626214] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.626612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.626978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.626996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.627370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.627716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.627732] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.627985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.628325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.628355] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 
00:30:04.107 [2024-02-13 08:30:37.628787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.629900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.629930] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.630264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.630641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.630691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.107 qpair failed and we were unable to recover it. 00:30:04.107 [2024-02-13 08:30:37.630998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.631313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.107 [2024-02-13 08:30:37.631343] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 00:30:04.108 [2024-02-13 08:30:37.631789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.632119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.632149] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 
00:30:04.108 [2024-02-13 08:30:37.632561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.632982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.632997] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 00:30:04.108 [2024-02-13 08:30:37.633350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.633719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.633751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 00:30:04.108 [2024-02-13 08:30:37.634141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.634545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.634575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 00:30:04.108 [2024-02-13 08:30:37.634978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.635360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.635397] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 
00:30:04.108 [2024-02-13 08:30:37.635734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.636008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.636038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 00:30:04.108 [2024-02-13 08:30:37.636373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.636697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.636728] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 00:30:04.108 [2024-02-13 08:30:37.637064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.637350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.637379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 00:30:04.108 [2024-02-13 08:30:37.637717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.638114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.108 [2024-02-13 08:30:37.638144] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.108 qpair failed and we were unable to recover it. 
00:30:04.108 [2024-02-13 08:30:37.638417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.638814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.638845] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.639228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.639628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.639671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.639974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.640275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.640293] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.640653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.640997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.641028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.641425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.641829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.641861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.642304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.642755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.642786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.643195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.643531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.643561] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.643880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.644187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.644203] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.644663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.645008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.645038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.645370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.645738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.645769] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.646081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.646364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.646394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.646810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.647145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.647175] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.647597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.647998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.648017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.648386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.648761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.648791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.649124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.649408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.649437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.649854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.650242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.650273] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.108 qpair failed and we were unable to recover it.
00:30:04.108 [2024-02-13 08:30:37.650620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.650938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.108 [2024-02-13 08:30:37.650953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.651215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.651613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.651643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.651967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.652316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.652346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.652703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.653055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.653070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.653391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.653720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.653751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.654037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.654372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.654402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.654929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.655399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.655430] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.655759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.656102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.656132] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.656489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.656893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.656924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.657260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.657672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.657705] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.658079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.658409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.658438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.658840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.659167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.659197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.659598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.660036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.660067] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.660399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.660812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.660842] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.661104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.661519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.661549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.661897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.662207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.662237] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.662623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.662922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.662953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.663366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.663770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.663801] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.664083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.664335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.664350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.664687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.665012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.665028] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.665323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.665684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.665699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.666019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.666322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.666337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.666727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.667084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.667114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.667479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.667729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.667746] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.668138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.668466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.668496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.109 qpair failed and we were unable to recover it.
00:30:04.109 [2024-02-13 08:30:37.668922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.669257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.109 [2024-02-13 08:30:37.669287] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.669699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.670029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.670059] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.670478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.670891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.670921] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.671331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.671597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.671627] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.672047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.672479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.672509] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.672882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.673284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.673314] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.673700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.674085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.674115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.674508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.674912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.674943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.675207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.675568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.675598] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.675992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.676310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.676341] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.676679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.677073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.677102] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.677504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.677810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.677841] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.678138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.678422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.678451] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.678796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.679142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.679171] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.679608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.679978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.680008] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.680323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.680704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.680735] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.681151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.681428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.681458] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.681785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.682071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.682102] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.682510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.682913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.682945] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.683282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.683590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.683605] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.683914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.684328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.684358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.110 qpair failed and we were unable to recover it.
00:30:04.110 [2024-02-13 08:30:37.684696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.685080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.110 [2024-02-13 08:30:37.685109] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.685570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.685923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.685955] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.686317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.686660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.686691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.687091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.687441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.687471] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.687869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.688280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.688310] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.688667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.689031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.689047] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.689463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.689832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.689862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.690275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.690679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.690710] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.690992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.691217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.691232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.691612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.691955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.691986] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.692299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.692662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.692678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.692946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.693275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.693304] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.693639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.693935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.693966] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.694423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.694835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.694891] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.695207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.695515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.695544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.695895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.696200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.696230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.696555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.696945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.696976] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.697398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.697745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.697775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.698055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.698386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.698415] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.698843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.699130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.699160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.699576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.699928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.699959] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.700295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.700608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.700638] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.701067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.701346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.701375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.701812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.702217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.702247] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.702638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.703061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.111 [2024-02-13 08:30:37.703092] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.111 qpair failed and we were unable to recover it.
00:30:04.111 [2024-02-13 08:30:37.703365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.112 [2024-02-13 08:30:37.703639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.112 [2024-02-13 08:30:37.703680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.112 qpair failed and we were unable to recover it.
00:30:04.112 [2024-02-13 08:30:37.704022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.704358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.704388] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.704765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.705140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.705171] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.705610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.705942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.705958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.706219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.706610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.706639] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 
00:30:04.112 [2024-02-13 08:30:37.707011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.707407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.707437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.707861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.708219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.708234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.708557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.708949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.708980] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.709374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.709726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.709756] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 
00:30:04.112 [2024-02-13 08:30:37.710022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.710280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.710296] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.710699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.710975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.711005] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.711393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.711721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.711752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.712050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.712338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.712353] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 
00:30:04.112 [2024-02-13 08:30:37.712763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.713075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.713105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.713395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.713721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.713751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.714161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.714489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.714519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.714765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.715099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.715129] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 
00:30:04.112 [2024-02-13 08:30:37.715395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.715723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.715739] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.716055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.716384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.716414] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.716793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.717130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.717159] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.717541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.717808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.717838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 
00:30:04.112 [2024-02-13 08:30:37.718254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.718525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.718555] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.718897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.719292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.719322] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.719664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.720099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.720129] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 00:30:04.112 [2024-02-13 08:30:37.720469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.720829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.720861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.112 qpair failed and we were unable to recover it. 
00:30:04.112 [2024-02-13 08:30:37.721206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.721537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.112 [2024-02-13 08:30:37.721567] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.721818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.722150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.722179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.722522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.722916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.722948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.723285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.723722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.723752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 
00:30:04.113 [2024-02-13 08:30:37.724137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.724634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.724681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.724973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.725375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.725405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.725767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.726146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.726176] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.726537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.726959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.726991] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 
00:30:04.113 [2024-02-13 08:30:37.727407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.727835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.727865] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.728268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.728614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.728644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.729004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.729335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.729365] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.729711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.730042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.730072] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 
00:30:04.113 [2024-02-13 08:30:37.730457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.730844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.730876] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.731224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.731622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.731661] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.731960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.732397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.732432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.732856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.733181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.733211] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 
00:30:04.113 [2024-02-13 08:30:37.733671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.733956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.733986] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.734262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.734729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.734760] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.735183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.735522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.735552] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.735856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.736131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.736160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 
00:30:04.113 [2024-02-13 08:30:37.736522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.736848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.736879] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.737168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.737548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.737578] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.737985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.738310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.738343] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.738754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.739159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.739189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 
00:30:04.113 [2024-02-13 08:30:37.739550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.739918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.739958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.113 [2024-02-13 08:30:37.740249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.740632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.113 [2024-02-13 08:30:37.740672] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.113 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.740948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.741335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.741351] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.741721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.742114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.742144] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 
00:30:04.114 [2024-02-13 08:30:37.742519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.742852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.742868] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.743175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.743530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.743560] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.743887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.744268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.744298] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.744712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.745046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.745076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 
00:30:04.114 [2024-02-13 08:30:37.745439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.745823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.745854] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.746191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.746610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.746639] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.746946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.747309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.747329] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.747719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.748020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.748034] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 
00:30:04.114 [2024-02-13 08:30:37.748295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.748672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.748688] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.749016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.749343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.749373] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.749769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.750060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.750089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 00:30:04.114 [2024-02-13 08:30:37.750486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.750893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.114 [2024-02-13 08:30:37.750925] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.114 qpair failed and we were unable to recover it. 
00:30:04.114 [2024-02-13 08:30:37.751296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.751629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.751670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.751963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.752369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.752398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.752732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.753006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.753035] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.753373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.753709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.753741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.754083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.754346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.754376] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.754797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.755180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.755210] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.755629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.755958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.755989] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.756328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.756735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.756765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.757174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.757531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.114 [2024-02-13 08:30:37.757561] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.114 qpair failed and we were unable to recover it.
00:30:04.114 [2024-02-13 08:30:37.757892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.758220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.758263] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.758669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.759007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.759038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.759323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.759741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.759772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.760044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.760433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.760462] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.760851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.761126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.761156] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.761534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.761879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.761910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.762174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.762544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.762574] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.762961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.763391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.763421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.763823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.764186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.764216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.764533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.764959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.764989] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.765370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.765728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.765759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.766050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.766394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.766424] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.766842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.767245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.767276] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.767616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.767933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.767963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.768298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.768627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.768667] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.769014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.769329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.769358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.769710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.770120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.770150] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.770600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.770973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.771004] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.771351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.771754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.771785] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.772117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.772388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.772432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.772757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.773085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.773114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.773481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.773811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.773842] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.774110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.774435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.774450] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.115 qpair failed and we were unable to recover it.
00:30:04.115 [2024-02-13 08:30:37.774786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.775092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.115 [2024-02-13 08:30:37.775107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.116 qpair failed and we were unable to recover it.
00:30:04.116 [2024-02-13 08:30:37.775368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.775632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.775653] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.116 qpair failed and we were unable to recover it.
00:30:04.116 [2024-02-13 08:30:37.776012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.776307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.776322] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.116 qpair failed and we were unable to recover it.
00:30:04.116 [2024-02-13 08:30:37.776654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.776969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.777012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.116 qpair failed and we were unable to recover it.
00:30:04.116 [2024-02-13 08:30:37.777347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.777619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.777662] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.116 qpair failed and we were unable to recover it.
00:30:04.116 [2024-02-13 08:30:37.777954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.778251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.116 [2024-02-13 08:30:37.778266] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.116 qpair failed and we were unable to recover it.
00:30:04.116 [2024-02-13 08:30:37.778701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.779024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.779039] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.779289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.779641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.779666] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.780002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.780331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.780347] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.780762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.781072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.781087] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.781329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.781670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.781702] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.781960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.782317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.782347] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.782701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.783017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.783046] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.783430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.783837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.783868] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.784203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.784637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.784679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.785064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.785338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.785368] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.384 qpair failed and we were unable to recover it.
00:30:04.384 [2024-02-13 08:30:37.785642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.384 [2024-02-13 08:30:37.785901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.785932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.786375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.786722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.786753] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.787150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.787421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.787450] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.787805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.788136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.788166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.788513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.788917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.788948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.789275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.789764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.789793] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.790224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.790630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.790674] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.790972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.791305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.791335] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.791740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.792077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.792107] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.792455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.792897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.792929] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.793287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.793695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.793726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.794069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.794416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.794431] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.794806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.795094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.795124] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.795527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.795927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.795958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.796347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.796724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.796740] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.797124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.797402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.797432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.797705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.798070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.798100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.798379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.798688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.798704] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.799051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.799390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.799406] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.799733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.800027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.800057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.800502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.800851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.800882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.801216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.801620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.801663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.801951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.802238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.802268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.385 [2024-02-13 08:30:37.802540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.802901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.385 [2024-02-13 08:30:37.802932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.385 qpair failed and we were unable to recover it.
00:30:04.386 [2024-02-13 08:30:37.803270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.803663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.803694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.804088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.804436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.804451] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.804806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.805098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.805113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.805372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.805685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.805701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 
00:30:04.386 [2024-02-13 08:30:37.805939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.806239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.806254] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.806501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.806872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.806903] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.807245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.807638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.807662] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.807941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.808329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.808344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 
00:30:04.386 [2024-02-13 08:30:37.808729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.809197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.809227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.809579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.809926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.809957] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.810237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.810577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.810608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.811114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.811459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.811488] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 
00:30:04.386 [2024-02-13 08:30:37.811937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.812207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.812237] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.812565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.812927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.812958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.813289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.813594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.813609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.813977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.814267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.814282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 
00:30:04.386 [2024-02-13 08:30:37.814705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.815014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.815030] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.815379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.815639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.815679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.816109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.816372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.816387] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.817791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.818164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.818183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 
00:30:04.386 [2024-02-13 08:30:37.818679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.818935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.818950] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.819192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.819447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.819462] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.819834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.820141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.820157] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.386 qpair failed and we were unable to recover it. 00:30:04.386 [2024-02-13 08:30:37.820532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.820935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.386 [2024-02-13 08:30:37.820966] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 
00:30:04.387 [2024-02-13 08:30:37.821299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.821669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.821685] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.821938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.822258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.822273] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.822578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.822914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.822930] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.823290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.823644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.823702] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 
00:30:04.387 [2024-02-13 08:30:37.823991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.824350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.824380] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.824816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.825142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.825172] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.825596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.826004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.826035] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.826298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.826695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.826727] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 
00:30:04.387 [2024-02-13 08:30:37.827033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.827390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.827419] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.827768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.828132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.828162] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.828641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.828989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.829019] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.829372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.829641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.829665] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 
00:30:04.387 [2024-02-13 08:30:37.830038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.830430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.830459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.830859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.831207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.831237] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.831584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.831945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.831977] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.832317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.832705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.832736] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 
00:30:04.387 [2024-02-13 08:30:37.832984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.833373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.833402] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.833843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.834237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.834266] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.834603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.835916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.835949] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.836235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.836552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.836583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 
00:30:04.387 [2024-02-13 08:30:37.837044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.837316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.837346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.837610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.837967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.837999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.838388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.838708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.838724] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.838975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.839220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.839235] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 
00:30:04.387 [2024-02-13 08:30:37.839709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.840038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.387 [2024-02-13 08:30:37.840068] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.387 qpair failed and we were unable to recover it. 00:30:04.387 [2024-02-13 08:30:37.840358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.840579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.840595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.840951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.841197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.841212] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.841537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.841865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.841880] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 
00:30:04.388 [2024-02-13 08:30:37.842214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.842558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.842587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.842995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.843343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.843379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.843768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.844026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.844056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.844338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.844641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.844668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 
00:30:04.388 [2024-02-13 08:30:37.844989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.845265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.845295] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.845619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.846043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.846074] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.846523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.846895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.846926] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.847218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.847582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.847611] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 
00:30:04.388 [2024-02-13 08:30:37.847981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.848319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.848348] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.848757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.848981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.849011] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.849291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.849692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.849723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 00:30:04.388 [2024-02-13 08:30:37.850016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.850356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.850391] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it. 
00:30:04.388 [2024-02-13 08:30:37.850775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.851066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.388 [2024-02-13 08:30:37.851095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.388 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle for tqpair=0x7fb0a8000b90 (addr=10.0.0.2, port=4420) repeats verbatim through 08:30:37.913607 ...]
00:30:04.392 [2024-02-13 08:30:37.913975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.914318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.914349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.914629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.915017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.915048] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.915375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.915783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.915815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.916144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.916405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.916421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 
00:30:04.392 [2024-02-13 08:30:37.916729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.917123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.917153] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.917611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.917955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.917985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.918394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.918745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.918775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.919088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.919518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.919548] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 
00:30:04.392 [2024-02-13 08:30:37.919904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.920185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.920215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.920718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.921053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.392 [2024-02-13 08:30:37.921083] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.392 qpair failed and we were unable to recover it. 00:30:04.392 [2024-02-13 08:30:37.921348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.921681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.921712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.922126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.922552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.922594] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 
00:30:04.393 [2024-02-13 08:30:37.922991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.923298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.923328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.923720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.924055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.924085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.924420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.924745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.924776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.925112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.925386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.925416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 
00:30:04.393 [2024-02-13 08:30:37.925820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.926150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.926180] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.926627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.926932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.926963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.927374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.927754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.927785] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.928207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.928556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.928587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 
00:30:04.393 [2024-02-13 08:30:37.929006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.929362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.929392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.929681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.930016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.930046] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.930435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.930824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.930856] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.931224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.931590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.931619] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 
00:30:04.393 [2024-02-13 08:30:37.932053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.932485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.932515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.932936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.933204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.933235] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.933594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.934017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.934048] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.934407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.934719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.934750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 
00:30:04.393 [2024-02-13 08:30:37.935094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.935485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.935515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.935976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.936307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.936337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.936674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.937008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.937037] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.937449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.937837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.937868] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 
00:30:04.393 [2024-02-13 08:30:37.938163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.938522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.938552] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.393 [2024-02-13 08:30:37.938966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.939387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.393 [2024-02-13 08:30:37.939417] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.393 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.939772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.940122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.940152] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.940548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.940878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.940894] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 
00:30:04.394 [2024-02-13 08:30:37.941212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.941594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.941624] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.942049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.942443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.942472] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.942879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.943320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.943350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.943748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.944145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.944175] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 
00:30:04.394 [2024-02-13 08:30:37.944533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.944873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.944904] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.945331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.945779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.945810] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.946216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.946641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.946683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.946986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.947438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.947468] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 
00:30:04.394 [2024-02-13 08:30:37.947882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.948293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.948323] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.948745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.949100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.949130] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.949559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.949959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.949990] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.950407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.950716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.950732] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 
00:30:04.394 [2024-02-13 08:30:37.950982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.951338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.951354] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.951766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.952050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.952080] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.952370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.952791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.952822] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.953160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.953577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.953607] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 
00:30:04.394 [2024-02-13 08:30:37.954082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.954519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.954549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.954823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.955213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.955243] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.955628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.956031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.956062] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.956484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.956872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.956902] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 
00:30:04.394 [2024-02-13 08:30:37.957238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.957643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.957684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.958113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.958449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.958479] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.394 qpair failed and we were unable to recover it. 00:30:04.394 [2024-02-13 08:30:37.958919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.394 [2024-02-13 08:30:37.959284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.959315] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.395 qpair failed and we were unable to recover it. 00:30:04.395 [2024-02-13 08:30:37.959721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.960074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.960105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.395 qpair failed and we were unable to recover it. 
00:30:04.395 [2024-02-13 08:30:37.960398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.960791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.960807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.395 qpair failed and we were unable to recover it. 00:30:04.395 [2024-02-13 08:30:37.961121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.961545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.961575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.395 qpair failed and we were unable to recover it. 00:30:04.395 [2024-02-13 08:30:37.961910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.962346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.962381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.395 qpair failed and we were unable to recover it. 00:30:04.395 [2024-02-13 08:30:37.962773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.963240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.395 [2024-02-13 08:30:37.963270] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.395 qpair failed and we were unable to recover it. 
00:30:04.395 [2024-02-13 08:30:37.963617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.964023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.964038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.964354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.964689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.964720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.965075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.965382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.965412] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.965802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.966080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.966111] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.966387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.966703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.966741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.966993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.967351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.967381] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.967707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.968107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.968137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.968531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.968884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.968900] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.969313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.969700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.969719] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.969976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.970364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.970394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.970668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.971027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.971056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.971399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.971822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.971853] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.972170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.972442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.972472] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.972807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.973069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.973098] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.973504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.973916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.973946] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.395 qpair failed and we were unable to recover it.
00:30:04.395 [2024-02-13 08:30:37.974335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.395 [2024-02-13 08:30:37.974726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.974742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.975104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.975410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.975425] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.975803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.976226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.976256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.976671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.977004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.977039] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.977384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.977782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.977813] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.978152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.978604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.978634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.979040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.979461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.979491] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.979879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.980260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.980291] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.980779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.981115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.981145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.981494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.981886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.981902] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.982208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.982605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.982637] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.982928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.983244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.983274] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.983667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.983960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.983989] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.984328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.984670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.984706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.985052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.985488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.985518] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.985898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.986181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.986211] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.986626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.987056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.987086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.987382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.987763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.987793] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.988115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.988537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.988568] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.988921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.989206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.989236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.989662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.990064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.990094] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.990511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.990855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.990871] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.991191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.991549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.991566] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.991942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.992222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.992252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.992659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.992989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.993019] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.396 [2024-02-13 08:30:37.993359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.993780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.396 [2024-02-13 08:30:37.993796] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.396 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.994098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.994392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.994422] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.994862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.995280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.995310] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.995695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.996046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.996075] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.996419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.996804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.996836] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.997189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.997413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.997428] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.997725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.998080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.998095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.998522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.998922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.998953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.999235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.999554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:37.999584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:37.999962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.000241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.000272] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.000606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.000935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.000966] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.001306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.001759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.001790] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.002066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.002456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.002485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.002802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.003155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.003184] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.003548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.004012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.004043] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.004317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.004731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.004762] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.005155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.005594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.005610] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.005853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.006164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.006179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.006621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.007011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.007042] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.007429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.007773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.007804] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.008139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.008491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.008520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.397 [2024-02-13 08:30:38.008849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.009183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.397 [2024-02-13 08:30:38.009214] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.397 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.009630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.009934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.009965] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.010350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.010694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.010726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.011068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.011417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.011447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.011893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.012277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.012306] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.012719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.013057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.013088] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.013521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.013928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.013959] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.014397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.014747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.014778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.015114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.015477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.015508] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.015905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.016191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.016206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.016599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.016996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.017026] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.017371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.017796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.017827] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.018114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.018382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.018412] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.018821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.019093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.019123] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.019539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.019956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.019987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.020361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.020676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.020706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.021045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.021432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.021463] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.021878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.022164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.022193] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.022637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.022994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.023024] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.023303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.023693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.023725] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.024003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.024262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.024293] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.024657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.024928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.024958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.025362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.025632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.025681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.026090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.026368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.026397] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.026727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.026983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.026999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.027391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.027792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.398 [2024-02-13 08:30:38.027824] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.398 qpair failed and we were unable to recover it.
00:30:04.398 [2024-02-13 08:30:38.028200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.028670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.028701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.029034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.029370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.029400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.029767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.030091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.030121] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.030505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.030917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.030947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.031223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.031533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.031548] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.031941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.032248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.032264] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.032668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.032968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.032999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.033340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.033597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.033627] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.034091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.034493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.034523] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.034796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.035045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.035060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.035425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.035812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.035829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.036209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.036616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.036655] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.036940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.037268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.037298] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.037666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.038004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.038035] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.038352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.038718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.038748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.039079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.039428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.039458] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.039776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.040094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.040124] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.040574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.040926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.040957] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.041222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.041589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.041619] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.042051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.042379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.042410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.042834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.043163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.043194] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.043510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.043857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.043887] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.044162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.044622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.044663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.399 [2024-02-13 08:30:38.045009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.045266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.399 [2024-02-13 08:30:38.045296] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.399 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.045626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.045945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.045975] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.046266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.046666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.046697] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.047040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.047314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.047344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.047740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.048074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.048104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.048409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.048825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.048857] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.049181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.049496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.049527] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.049871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.050214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.050244] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.050668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.050981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.051011] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.051274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.051595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.051625] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.051976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.052285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.052314] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.052670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.053042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.053073] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.053401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.053668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.053698] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.054062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.054537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.054581] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.054954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.055215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.055245] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.055610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.056009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.056040] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.056316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.056685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.056716] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.057154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.057589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.057604] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.057956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.058259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.058274] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.058748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.059008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.059024] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.059349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.059673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.059689] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.400 qpair failed and we were unable to recover it.
00:30:04.400 [2024-02-13 08:30:38.060071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.060334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.400 [2024-02-13 08:30:38.060349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.401 qpair failed and we were unable to recover it.
00:30:04.401 [2024-02-13 08:30:38.060655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.061017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.061032] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.401 qpair failed and we were unable to recover it.
00:30:04.401 [2024-02-13 08:30:38.061343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.061680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.061696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.401 qpair failed and we were unable to recover it.
00:30:04.401 [2024-02-13 08:30:38.062017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.062372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.062401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.401 qpair failed and we were unable to recover it.
00:30:04.401 [2024-02-13 08:30:38.062794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.063121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.401 [2024-02-13 08:30:38.063137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.401 qpair failed and we were unable to recover it.
00:30:04.669 [2024-02-13 08:30:38.063441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.669 [2024-02-13 08:30:38.063853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.669 [2024-02-13 08:30:38.063869] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.669 qpair failed and we were unable to recover it.
00:30:04.669 [2024-02-13 08:30:38.064108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.669 [2024-02-13 08:30:38.064483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.669 [2024-02-13 08:30:38.064498] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.669 qpair failed and we were unable to recover it.
00:30:04.669 [2024-02-13 08:30:38.064897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.669 [2024-02-13 08:30:38.065194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.669 [2024-02-13 08:30:38.065209] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.669 qpair failed and we were unable to recover it.
00:30:04.669 [2024-02-13 08:30:38.065645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.669 [2024-02-13 08:30:38.065979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.066021] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.670 qpair failed and we were unable to recover it.
00:30:04.670 [2024-02-13 08:30:38.066386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.066767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.066798] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.670 qpair failed and we were unable to recover it.
00:30:04.670 [2024-02-13 08:30:38.067132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.067485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.067515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.670 qpair failed and we were unable to recover it.
00:30:04.670 [2024-02-13 08:30:38.067917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.068277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.068307] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.670 qpair failed and we were unable to recover it.
00:30:04.670 [2024-02-13 08:30:38.068585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.068986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.670 [2024-02-13 08:30:38.069002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.670 qpair failed and we were unable to recover it.
00:30:04.670 [2024-02-13 08:30:38.069237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.069576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.069606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.070011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.070347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.070388] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.070755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.071045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.071075] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.071539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.071874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.071904] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 
00:30:04.670 [2024-02-13 08:30:38.072170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.072524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.072539] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.072794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.073150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.073169] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.073504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.073819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.073834] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.074140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.074563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.074593] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 
00:30:04.670 [2024-02-13 08:30:38.074999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.075309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.075325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.075629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.076116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.076147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.076557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.076968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.076998] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.077318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.077748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.077779] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 
00:30:04.670 [2024-02-13 08:30:38.078136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.078398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.078413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.078769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.079025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.079040] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.079297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.079621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.079637] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.079933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.080316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.080351] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 
00:30:04.670 [2024-02-13 08:30:38.080678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.080920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.080936] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.081251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.081625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.081641] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.081954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.082196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.082211] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.082527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.082816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.082832] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 
00:30:04.670 [2024-02-13 08:30:38.083136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.083374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.083389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.670 [2024-02-13 08:30:38.083615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.083941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.670 [2024-02-13 08:30:38.083958] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.670 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.084261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.084571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.084587] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.084933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.085224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.085254] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 
00:30:04.671 [2024-02-13 08:30:38.085636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.085979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.086009] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.086368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.086664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.086683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.086950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.087296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.087325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.087739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.088069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.088084] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 
00:30:04.671 [2024-02-13 08:30:38.088398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.088785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.088816] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.089114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.089366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.089395] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.089871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.090198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.090213] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.090522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.090850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.090866] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 
00:30:04.671 [2024-02-13 08:30:38.091223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.091523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.091538] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.091849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.092093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.092108] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.092412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.092709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.092726] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.093046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.093454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.093489] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 
00:30:04.671 [2024-02-13 08:30:38.093770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.094042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.094086] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.094391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.094770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.094800] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.095084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.095487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.095516] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.095867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.096243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.096258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 
00:30:04.671 [2024-02-13 08:30:38.096643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.096945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.096960] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.097282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.097685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.097717] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.097991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.098332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.098361] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.098680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.099092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.099122] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 
00:30:04.671 [2024-02-13 08:30:38.099494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.099824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.099854] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.100134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.100367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.100382] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.100748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.100980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.100996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.101416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.101666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.101681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 
00:30:04.671 [2024-02-13 08:30:38.102096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.102495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.102524] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.671 qpair failed and we were unable to recover it. 00:30:04.671 [2024-02-13 08:30:38.102924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.671 [2024-02-13 08:30:38.103255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.103284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.103616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.103915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.103931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.104189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.104471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.104487] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 
00:30:04.672 [2024-02-13 08:30:38.104790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.105174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.105189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.105563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.105889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.105905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.106240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.106574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.106604] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.106981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.107360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.107375] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 
00:30:04.672 [2024-02-13 08:30:38.107764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.108159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.108174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.108539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.108821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.108837] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.109123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.109494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.109509] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.109865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.110196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.110226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 
00:30:04.672 [2024-02-13 08:30:38.110582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.110891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.110922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.111279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.111693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.111724] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.112068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.112314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.112329] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.112640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.112937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.112952] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 
00:30:04.672 [2024-02-13 08:30:38.113269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.113632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.113654] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.114039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.114459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.114474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.114838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.115198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.115228] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 00:30:04.672 [2024-02-13 08:30:38.115601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.116021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.672 [2024-02-13 08:30:38.116052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.672 qpair failed and we were unable to recover it. 
00:30:04.675 [2024-02-13 08:30:38.182312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.675 [2024-02-13 08:30:38.182721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.675 [2024-02-13 08:30:38.182751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.675 qpair failed and we were unable to recover it. 00:30:04.675 [2024-02-13 08:30:38.183138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.675 [2024-02-13 08:30:38.183484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.675 [2024-02-13 08:30:38.183514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.675 qpair failed and we were unable to recover it. 00:30:04.675 [2024-02-13 08:30:38.183909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.184322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.184351] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.184734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.185145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.185174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 
00:30:04.676 [2024-02-13 08:30:38.185584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.185996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.186027] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.186361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.186671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.186701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.187120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.187504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.187534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.187948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.188269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.188298] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 
00:30:04.676 [2024-02-13 08:30:38.188709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.189117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.189147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.189487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.189914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.189945] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.190234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.190632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.190671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.190947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.191299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.191328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 
00:30:04.676 [2024-02-13 08:30:38.191721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.192069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.192098] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.192383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.192786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.192817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.193133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.193546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.193581] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.193986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.194320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.194350] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 
00:30:04.676 [2024-02-13 08:30:38.194758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.195167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.195182] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.195483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.195862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.195894] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.196219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.196570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.196599] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.196873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.197275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.197305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 
00:30:04.676 [2024-02-13 08:30:38.197631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.198040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.198055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.198421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.198771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.198802] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.199210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.199609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.199638] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.200058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.200466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.200496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 
00:30:04.676 [2024-02-13 08:30:38.200859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.201206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.201241] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.201556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.201975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.202006] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.202336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.202676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.202706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.203046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.203375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.203405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 
00:30:04.676 [2024-02-13 08:30:38.203753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.204086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.204115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.676 qpair failed and we were unable to recover it. 00:30:04.676 [2024-02-13 08:30:38.204498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.676 [2024-02-13 08:30:38.204900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.204931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.205335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.205713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.205744] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.206086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.206485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.206515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 
00:30:04.677 [2024-02-13 08:30:38.206927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.207260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.207289] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.207692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.208103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.208132] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.208512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.208896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.208933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.209344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.209680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.209711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 
00:30:04.677 [2024-02-13 08:30:38.210117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.210523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.210552] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.210959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.211361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.211391] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.211665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.211994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.212040] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.212337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.212694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.212709] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 
00:30:04.677 [2024-02-13 08:30:38.213037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.213360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.213389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.213798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.214161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.214191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.214601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.215031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.215062] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.215415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.215821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.215852] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 
00:30:04.677 [2024-02-13 08:30:38.216262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.216687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.216722] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.217147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.217569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.217598] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.218010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.218417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.218432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.218728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.219114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.219143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 
00:30:04.677 [2024-02-13 08:30:38.219499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.219831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.219862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.220195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.220582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.220611] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.221007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.221330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.221360] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.221763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.222147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.222178] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 
00:30:04.677 [2024-02-13 08:30:38.222525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.222923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.222954] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.223361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.223765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.223795] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.224268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.224675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.224706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.225097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.225499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.225529] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 
00:30:04.677 [2024-02-13 08:30:38.225940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.226349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.677 [2024-02-13 08:30:38.226379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.677 qpair failed and we were unable to recover it. 00:30:04.677 [2024-02-13 08:30:38.226789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.678 [2024-02-13 08:30:38.227171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.678 [2024-02-13 08:30:38.227200] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.678 qpair failed and we were unable to recover it. 00:30:04.678 [2024-02-13 08:30:38.227459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.678 [2024-02-13 08:30:38.227868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.678 [2024-02-13 08:30:38.227899] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.678 qpair failed and we were unable to recover it. 00:30:04.678 [2024-02-13 08:30:38.228310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.678 [2024-02-13 08:30:38.228718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.678 [2024-02-13 08:30:38.228749] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.678 qpair failed and we were unable to recover it. 
00:30:04.681 [2024-02-13 08:30:38.294738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.295051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.295081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.295493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.295847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.295878] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.296265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.296677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.296709] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.297026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.297430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.297459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 
00:30:04.681 [2024-02-13 08:30:38.297841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.298143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.298172] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.298583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.298964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.298996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.299404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.299809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.299825] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.300202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.300530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.300560] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 
00:30:04.681 [2024-02-13 08:30:38.300972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.301302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.301331] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.301669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.302074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.302104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.302488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.302869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.302900] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.303317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.303710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.303741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 
00:30:04.681 [2024-02-13 08:30:38.304074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.304393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.304408] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.304816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.305210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.305241] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.305672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.306093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.306123] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.306576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.306940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.306971] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 
00:30:04.681 [2024-02-13 08:30:38.307286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.307701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.307731] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.308073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.308433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.308462] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.681 [2024-02-13 08:30:38.308847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.309206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.681 [2024-02-13 08:30:38.309235] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.681 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.309645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.310009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.310038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 
00:30:04.682 [2024-02-13 08:30:38.310448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.310700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.310730] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.311136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.311408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.311437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.311845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.312185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.312214] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.312553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.312891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.312922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 
00:30:04.682 [2024-02-13 08:30:38.313328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.313711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.313741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.314081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.314481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.314510] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.314862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.315145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.315175] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.315562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.315854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.315885] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 
00:30:04.682 [2024-02-13 08:30:38.316227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.316631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.316673] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.317086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.317512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.317542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.317966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.318296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.318311] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.318621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.319012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.319048] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 
00:30:04.682 [2024-02-13 08:30:38.319464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.319842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.319858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.320241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.320585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.320615] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.320996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.321307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.321322] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.321672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.322066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.322095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 
00:30:04.682 [2024-02-13 08:30:38.322448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.322846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.322876] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.323189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.323529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.323544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.323905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.324219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.324234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.324585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.324970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.325000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 
00:30:04.682 [2024-02-13 08:30:38.325249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.325721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.325752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.326093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.326498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.326533] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.326923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.327327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.327357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.327745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.328150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.328179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 
00:30:04.682 [2024-02-13 08:30:38.328591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.328874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.328905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.329299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.329703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.329734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.682 qpair failed and we were unable to recover it. 00:30:04.682 [2024-02-13 08:30:38.330063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.682 [2024-02-13 08:30:38.330469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.330499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.330912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.331180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.331210] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 
00:30:04.683 [2024-02-13 08:30:38.331472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.331798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.331828] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.332315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.332619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.332658] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.332989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.333380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.333409] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.333721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.334031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.334065] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 
00:30:04.683 [2024-02-13 08:30:38.334461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.334889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.334920] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.335238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.335550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.335579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.335932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.336310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.336338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.336747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.337147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.337176] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 
00:30:04.683 [2024-02-13 08:30:38.337588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.338002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.338033] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.338387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.338676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.338707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.339057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.339460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.339490] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 00:30:04.683 [2024-02-13 08:30:38.339905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.340315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.683 [2024-02-13 08:30:38.340344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.683 qpair failed and we were unable to recover it. 
00:30:04.953 [2024-02-13 08:30:38.405193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.953 [2024-02-13 08:30:38.405518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.953 [2024-02-13 08:30:38.405548] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.953 qpair failed and we were unable to recover it. 00:30:04.953 [2024-02-13 08:30:38.405955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.406300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.406330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.406671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.407003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.407032] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.407446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.407850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.407881] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 
00:30:04.954 [2024-02-13 08:30:38.408285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.408670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.408701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.409069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.409481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.409511] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.409918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.410228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.410258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.410672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.411080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.411110] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 
00:30:04.954 [2024-02-13 08:30:38.411498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.411907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.411938] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.412353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.412733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.412772] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.413138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.413468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.413499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.413841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.414247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.414277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 
00:30:04.954 [2024-02-13 08:30:38.414687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.415096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.415111] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.415417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.415826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.415856] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.416268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.416661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.416691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.417079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.417485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.417514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 
00:30:04.954 [2024-02-13 08:30:38.417924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.418261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.418290] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.418698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.419099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.419114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.419443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.419775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.419806] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.420214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.420613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.420643] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 
00:30:04.954 [2024-02-13 08:30:38.420982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.421386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.421401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.421751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.422130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.422160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.422518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.422850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.422902] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.423301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.423628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.423667] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 
00:30:04.954 [2024-02-13 08:30:38.424002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.424404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.424434] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.424844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.425249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.425279] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.425692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.426012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.426041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 00:30:04.954 [2024-02-13 08:30:38.426453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.426805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.426835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.954 qpair failed and we were unable to recover it. 
00:30:04.954 [2024-02-13 08:30:38.427169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.954 [2024-02-13 08:30:38.427574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.427605] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.428026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.428405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.428449] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.428814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.429215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.429244] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.429662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.430055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.430084] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 
00:30:04.955 [2024-02-13 08:30:38.430493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.430851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.430881] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.431235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.431618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.431656] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.432091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.432506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.432536] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.432957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.433358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.433388] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 
00:30:04.955 [2024-02-13 08:30:38.433806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.434137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.434166] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.434501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.434833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.434864] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.435274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.435635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.435679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.436015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.436408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.436438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 
00:30:04.955 [2024-02-13 08:30:38.436794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.437196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.437225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.437632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.437911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.437941] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.438338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.438597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.438627] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.439059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.439343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.439373] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 
00:30:04.955 [2024-02-13 08:30:38.439785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.440117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.440147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.440419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.440737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.440770] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.441070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.441383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.441413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.441749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.442081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.442110] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 
00:30:04.955 [2024-02-13 08:30:38.442444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.442855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.442892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.443325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.443642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.443684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.444116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.444554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.444584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.444948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.445354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.445384] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 
00:30:04.955 [2024-02-13 08:30:38.445712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.446115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.446130] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.955 qpair failed and we were unable to recover it. 00:30:04.955 [2024-02-13 08:30:38.446505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.955 [2024-02-13 08:30:38.446851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.446882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.447211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.447597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.447627] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.448046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.448375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.448405] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 
00:30:04.956 [2024-02-13 08:30:38.448747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.449129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.449158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.449561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.449891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.449922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.450277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.450601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.450635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.451067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.451456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.451486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 
00:30:04.956 [2024-02-13 08:30:38.451872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.452254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.452284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.452545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.452937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.452969] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.453353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.453768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.453799] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.454188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.454592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.454621] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 
00:30:04.956 [2024-02-13 08:30:38.455000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.455336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.455366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.455609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.456021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.456051] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.456467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.456795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.456830] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.457238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.457495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.457524] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 
00:30:04.956 [2024-02-13 08:30:38.457858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.458266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.458301] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.458544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.458958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.458988] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.459313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.459643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.459699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.460011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.460409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.460438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 
00:30:04.956 [2024-02-13 08:30:38.460794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.461185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.461214] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.461626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.462030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.462062] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.462468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.462851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.462883] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.463246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.463583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.463612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 
00:30:04.956 [2024-02-13 08:30:38.463879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.464147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.464176] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.464593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.464995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.956 [2024-02-13 08:30:38.465026] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.956 qpair failed and we were unable to recover it. 00:30:04.956 [2024-02-13 08:30:38.465369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.465700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.465731] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.466143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.466469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.466499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-02-13 08:30:38.466902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.467233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.467262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.467602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.468038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.468069] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.468511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.468852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.468883] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.469293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.469693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.469724] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-02-13 08:30:38.470143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.470537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.470567] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.470985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.471311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.471340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.471756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.472117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.472147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.472549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.472954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.472985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-02-13 08:30:38.473322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.473734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.473765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.474098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.474484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.474514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.474906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.475258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.475288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.475684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.475961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.475990] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-02-13 08:30:38.476380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.476786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.476818] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.477226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.477626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.477667] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.478050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.478400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.478429] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.478840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.479301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.479330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-02-13 08:30:38.479736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.480035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.480071] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.480424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.480802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.480834] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.481215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.481711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.481742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.482094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.482483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.482516] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-02-13 08:30:38.482857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.483199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.483230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.483632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.484023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.484055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.484472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.484879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.484910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.485205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.485656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.485688] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 
00:30:04.957 [2024-02-13 08:30:38.486071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.486474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.486503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.957 qpair failed and we were unable to recover it. 00:30:04.957 [2024-02-13 08:30:38.486900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.957 [2024-02-13 08:30:38.487224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.487258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.487594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.488022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.488053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.488336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.488607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.488636] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-02-13 08:30:38.489065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.489398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.489428] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.489769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.490159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.490189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.490520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.490880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.490911] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.491329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.491728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.491759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-02-13 08:30:38.492094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.492473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.492504] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.492854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.493187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.493217] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.493599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.494029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.494060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.494424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.494749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.494780] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-02-13 08:30:38.495193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.495612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.495641] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.496160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.496480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.496510] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.496895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.497228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.497258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.497692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.498101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.498131] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-02-13 08:30:38.498616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.498957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.498988] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.499388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.499792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.499823] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.500160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.500541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.500556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.500926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.501256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.501286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-02-13 08:30:38.501738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.502139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.502178] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.502556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.502890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.502921] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.503250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.503633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.503674] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.504024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.504349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.504379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-02-13 08:30:38.504765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.505148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.505178] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.505581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.505915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.505946] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.506359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.506707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.506738] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.958 [2024-02-13 08:30:38.507132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.507511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.507541] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 
00:30:04.958 [2024-02-13 08:30:38.507925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.508252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.958 [2024-02-13 08:30:38.508266] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.958 qpair failed and we were unable to recover it. 00:30:04.959 [2024-02-13 08:30:38.508576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-02-13 08:30:38.508932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-02-13 08:30:38.508963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-02-13 08:30:38.509342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-02-13 08:30:38.509682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-02-13 08:30:38.509713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 00:30:04.959 [2024-02-13 08:30:38.510051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-02-13 08:30:38.510454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.959 [2024-02-13 08:30:38.510483] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.959 qpair failed and we were unable to recover it. 
00:30:04.959 [2024-02-13 08:30:38.510820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.511228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.511259] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.511621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.511999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.512015] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.512264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.512669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.512701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.512995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.513424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.513454] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.513865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.514197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.514227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.514633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.514956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.514985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.515298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.515661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.515691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.516086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.516508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.516538] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.516907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.517316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.517346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.517763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.518135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.518165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.518569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.518973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.519003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.519413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.519761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.519777] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.520163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.520472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.520501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.520913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.521257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.521287] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.521724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.522023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.522053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.522463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.522831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.522861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.523132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.523490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.523520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.523855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.524125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.524154] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.524438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.524777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.524807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.525066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.525294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.525323] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.525664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.526022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.526052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.526334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.526747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.526796] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.527169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.527570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.527600] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.527868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.528201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.528231] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.959 qpair failed and we were unable to recover it.
00:30:04.959 [2024-02-13 08:30:38.528590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.528992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.959 [2024-02-13 08:30:38.529023] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.529419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.529758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.529788] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.530195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.530587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.530617] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.530946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.531342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.531372] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.531759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.532070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.532099] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.532469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.532807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.532838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.533172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.533507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.533536] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.533774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.534199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.534229] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.534538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.534882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.534913] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.535300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.535662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.535693] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.536066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.536459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.536488] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.536914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.537243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.537258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.537639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.538029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.538058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.538387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.538737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.538752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.539136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.539455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.539484] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.539900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.540329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.540358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.540743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.540995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.541025] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.541360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2445144 Killed "${NVMF_APP[@]}" "$@"
00:30:04.960 [2024-02-13 08:30:38.541656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.541674] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.542045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 08:30:38 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:30:04.960 [2024-02-13 08:30:38.542395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.542411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 08:30:38 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 08:30:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:30:04.960 [2024-02-13 08:30:38.542792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 08:30:38 -- common/autotest_common.sh@710 -- # xtrace_disable
00:30:04.960 [2024-02-13 08:30:38.543032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.543049] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 08:30:38 -- common/autotest_common.sh@10 -- # set +x
00:30:04.960 [2024-02-13 08:30:38.543416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.543720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.543736] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.544113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.544407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.544423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.960 qpair failed and we were unable to recover it.
00:30:04.960 [2024-02-13 08:30:38.544781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.960 [2024-02-13 08:30:38.545054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.545069] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.545302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.545594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.545609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.545858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.546171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.546187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.546429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.546713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.546729] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.547060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.547433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.547448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.547754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.548129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.548144] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.548372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.548681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.548697] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.549069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.549444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.549460] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 08:30:38 -- nvmf/common.sh@469 -- # nvmfpid=2445940
00:30:04.961 [2024-02-13 08:30:38.549764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 08:30:38 -- nvmf/common.sh@470 -- # waitforlisten 2445940
00:30:04.961 08:30:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:04.961 [2024-02-13 08:30:38.550068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.550085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 08:30:38 -- common/autotest_common.sh@817 -- # '[' -z 2445940 ']'
00:30:04.961 [2024-02-13 08:30:38.550391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 08:30:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:04.961 [2024-02-13 08:30:38.550743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 08:30:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:30:04.961 [2024-02-13 08:30:38.550759] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 08:30:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:04.961 [2024-02-13 08:30:38.551118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 08:30:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:30:04.961 [2024-02-13 08:30:38.551404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.551420] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 08:30:38 -- common/autotest_common.sh@10 -- # set +x
00:30:04.961 [2024-02-13 08:30:38.551776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.551922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.551937] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.552296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.552668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.552684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.553062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.553363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.553378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.553739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.554026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.554041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.554333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.554640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.554663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.555016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.555395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.555410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.555692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.556001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.556016] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.556256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.556606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.556621] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.556978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.557329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.557345] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.557671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.557917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.557933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.558233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.558487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.961 [2024-02-13 08:30:38.558502] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.961 qpair failed and we were unable to recover it.
00:30:04.961 [2024-02-13 08:30:38.558858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.559144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.559158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-02-13 08:30:38.559476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.559771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.559786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-02-13 08:30:38.559982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.560265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.560281] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 00:30:04.961 [2024-02-13 08:30:38.560573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.560827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.961 [2024-02-13 08:30:38.560843] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.961 qpair failed and we were unable to recover it. 
00:30:04.962 [2024-02-13 08:30:38.561157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.561453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.561469] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.561754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.561932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.561947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.562174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.562490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.562504] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.562803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.563032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.563047] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 
00:30:04.962 [2024-02-13 08:30:38.563276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.563666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.563681] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.564038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.564406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.564421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.564667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.565043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.565058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.565294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.565596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.565611] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 
00:30:04.962 [2024-02-13 08:30:38.565936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.566295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.566309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.566620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.566867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.566882] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.567234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.567538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.567553] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.568003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.568309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.568324] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 
00:30:04.962 [2024-02-13 08:30:38.568560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.568911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.568926] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.569225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.569457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.569472] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.569775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.570073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.570088] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.570282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.570576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.570591] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 
00:30:04.962 [2024-02-13 08:30:38.570883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.571266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.571280] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.571471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.571823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.571838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.572135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.572430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.572448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.572805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.573037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.573052] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 
00:30:04.962 [2024-02-13 08:30:38.573352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.573670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.573686] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.574010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.574311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.574327] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.574680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.574987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.575002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.575297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.575701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.575717] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 
00:30:04.962 [2024-02-13 08:30:38.576104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.576327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.576342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.576595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.576799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.576815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.577104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.577469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.577484] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.962 qpair failed and we were unable to recover it. 00:30:04.962 [2024-02-13 08:30:38.577763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.962 [2024-02-13 08:30:38.578068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.578083] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 
00:30:04.963 [2024-02-13 08:30:38.578449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.578734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.578751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.579064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.579313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.579328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.579654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.579899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.579914] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.580210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.580425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.580439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 
00:30:04.963 [2024-02-13 08:30:38.580726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.580960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.580974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.581251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.581488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.581502] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.581846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.582143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.582158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.582445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.582674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.582689] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 
00:30:04.963 [2024-02-13 08:30:38.582988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.583291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.583306] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.583587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.583823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.583838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.584124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.584292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.584310] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.584679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.584890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.584905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 
00:30:04.963 [2024-02-13 08:30:38.585199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.585450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.585464] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.585754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.586120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.586135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.586446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.586760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.586776] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.587089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.587378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.587394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 
00:30:04.963 [2024-02-13 08:30:38.587682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.588010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.588024] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.588314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.588627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.588642] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.588964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.589318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.589332] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.589557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.589848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.589863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 
00:30:04.963 [2024-02-13 08:30:38.590051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.590341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.590359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.590600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.590876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.590892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.591279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.591585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.591599] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 00:30:04.963 [2024-02-13 08:30:38.591897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.592174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.963 [2024-02-13 08:30:38.592189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.963 qpair failed and we were unable to recover it. 
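The entries above repeat one pattern: two refused connect() attempts from posix_sock_create, a transport-level socket error for tqpair 0x7fb0a8000b90 against 10.0.0.2:4420, then the give-up line "qpair failed and we were unable to recover it." A minimal, hypothetical sketch of that connect-retry-then-give-up shape (not SPDK's actual nvme_tcp code; the helper name and retry counts are illustrative) looks like:

```python
import socket
import time

def connect_with_retries(addr: str, port: int, attempts: int = 2, delay: float = 0.0):
    """Hypothetical helper: retry a TCP connect, give up after repeated failures."""
    for _ in range(attempts):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        rc = sock.connect_ex((addr, port))  # returns 0 on success, else an errno
        if rc == 0:
            return sock                     # connected; caller owns the socket
        sock.close()
        print(f"connect() failed, errno = {rc}")
        time.sleep(delay)                   # brief pause before the next attempt
    # Corresponds to the log's "qpair failed and we were unable to recover it."
    return None
```

Against a port with no listener every attempt fails, so the helper returns None, which is the analogue of the unrecoverable-qpair message in the log.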
00:30:04.963 [2024-02-13 08:30:38.592542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.963 [2024-02-13 08:30:38.592833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.963 [2024-02-13 08:30:38.592848] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.963 qpair failed and we were unable to recover it.
00:30:04.963 [2024-02-13 08:30:38.593153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.963 [2024-02-13 08:30:38.593568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.963 [2024-02-13 08:30:38.593583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.963 qpair failed and we were unable to recover it.
00:30:04.963 [2024-02-13 08:30:38.593929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.963 [2024-02-13 08:30:38.594179] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:30:04.964 [2024-02-13 08:30:38.594222] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:04.964 [2024-02-13 08:30:38.594299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:04.964 [2024-02-13 08:30:38.594314] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:04.964 qpair failed and we were unable to recover it.
00:30:04.964 [2024-02-13 08:30:38.594656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.594944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.594959] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.595262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.595564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.595579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.596049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.596411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.596426] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.596803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.597023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.597038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 
00:30:04.964 [2024-02-13 08:30:38.597313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.597660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.597676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.597963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.598313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.598328] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.598548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.598823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.598839] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.599141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.599490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.599506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 
00:30:04.964 [2024-02-13 08:30:38.599793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.600084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.600099] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.600462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.600747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.600762] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.601105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.601333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.601348] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.601727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.601960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.601974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 
00:30:04.964 [2024-02-13 08:30:38.602267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.602542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.602556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.602862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.603155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.603170] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.603399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.603690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.603705] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.603896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.604122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.604138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 
00:30:04.964 [2024-02-13 08:30:38.604431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.604771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.604787] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.605072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.605292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.605307] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.605596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.605964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.605980] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.606256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.606484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.606499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 
00:30:04.964 [2024-02-13 08:30:38.606725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.607005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.607020] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.607343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.607682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.607697] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.607922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.608374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.608389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.608633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.608954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.608969] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 
00:30:04.964 [2024-02-13 08:30:38.609246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.609525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.609540] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.609814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.610108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.610122] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.964 qpair failed and we were unable to recover it. 00:30:04.964 [2024-02-13 08:30:38.610396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.964 [2024-02-13 08:30:38.610707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.610723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.610941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.611169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.611184] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 
00:30:04.965 [2024-02-13 08:30:38.611466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.611719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.611734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.612021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.612304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.612320] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.612683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.612913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.612927] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.613204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.613431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.613446] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 
00:30:04.965 [2024-02-13 08:30:38.613756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.613985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.614000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.614302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.614579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.614595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.614877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.615101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.615116] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.615391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.615751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.615766] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 
00:30:04.965 [2024-02-13 08:30:38.616050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.616327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.616342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.616550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.616907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.616922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.617140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.617505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.617519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.617807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.618150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.618165] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 
00:30:04.965 [2024-02-13 08:30:38.618338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.618624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.618638] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.618923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.619137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.619151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.619460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.619615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.619630] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.619861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.620148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.620162] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 
00:30:04.965 [2024-02-13 08:30:38.620455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.620733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.620748] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.621063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.621357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.621371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.621658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.621959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.621974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.622207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.622500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.622514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 
00:30:04.965 [2024-02-13 08:30:38.622794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.622926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.622940] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.623185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.623463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.623478] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.965 qpair failed and we were unable to recover it. 00:30:04.965 [2024-02-13 08:30:38.623870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.965 [2024-02-13 08:30:38.624237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.624253] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 00:30:04.966 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.966 [2024-02-13 08:30:38.624617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.624896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.624912] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 
00:30:04.966 [2024-02-13 08:30:38.625151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.625367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.625382] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 00:30:04.966 [2024-02-13 08:30:38.625731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.625987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.626003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 00:30:04.966 [2024-02-13 08:30:38.626292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.626634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.626655] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 00:30:04.966 [2024-02-13 08:30:38.626929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.627273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.627290] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 
00:30:04.966 [2024-02-13 08:30:38.627496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.627834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.627850] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 00:30:04.966 [2024-02-13 08:30:38.628125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.628490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.628505] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 00:30:04.966 [2024-02-13 08:30:38.628870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.629119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.629133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 00:30:04.966 [2024-02-13 08:30:38.629359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.629701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.966 [2024-02-13 08:30:38.629717] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:04.966 qpair failed and we were unable to recover it. 
00:30:04.966 [2024-02-13 08:30:38.629928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.630265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.630280] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.630573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.630769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.630785] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.631148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.631354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.631369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.631684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.631988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.632003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 
00:30:05.237 [2024-02-13 08:30:38.632350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.632568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.632583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.632927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.633137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.633152] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.633424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.633718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.633733] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.634009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.634244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.634258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 
00:30:05.237 [2024-02-13 08:30:38.634490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.634624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.634640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.634958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.635269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.635283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.635516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.635858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.635873] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.636136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.636311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.636326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 
00:30:05.237 [2024-02-13 08:30:38.636698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.637039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.637053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.637294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.637662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.637677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.637978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.638247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.638261] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.638534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.638818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.638833] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 
00:30:05.237 [2024-02-13 08:30:38.639149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.639424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.639439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.639742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.639960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.639975] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.640348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.640601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.640615] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.640901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.641260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.641274] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 
00:30:05.237 [2024-02-13 08:30:38.641483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.641699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.641715] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.237 qpair failed and we were unable to recover it. 00:30:05.237 [2024-02-13 08:30:38.642081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.237 [2024-02-13 08:30:38.642291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.238 [2024-02-13 08:30:38.642305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.238 qpair failed and we were unable to recover it. 00:30:05.238 [2024-02-13 08:30:38.642654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.238 [2024-02-13 08:30:38.643016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.238 [2024-02-13 08:30:38.643031] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.238 qpair failed and we were unable to recover it. 00:30:05.238 [2024-02-13 08:30:38.643381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.238 [2024-02-13 08:30:38.643664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.238 [2024-02-13 08:30:38.643679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.238 qpair failed and we were unable to recover it. 
00:30:05.238 [2024-02-13 08:30:38.643954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.644293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.644308] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.644602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.644894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.644910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.645255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.645543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.645558] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.645919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.646209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.646224] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.646503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.646859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.646874] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.647223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.647438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.647452] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.647772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.648069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.648084] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.648380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.648668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.648683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.649022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.649268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.649282] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.649654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.649942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.649956] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.650187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.650495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.650509] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.650801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.651096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.651111] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.651440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.651798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.651813] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.652116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.652407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.652422] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.652669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.652900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.652915] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.653206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.653572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.653586] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.653897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.654205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.654219] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.654559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.654840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.654855] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.655199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.655486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.655500] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.655849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.656197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.656212] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.656451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.656791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.656806] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.657093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.657439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.657454] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.657793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.658090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.658105] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.658386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.658677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.658692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.659062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.659253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.659268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.238 qpair failed and we were unable to recover it.
00:30:05.238 [2024-02-13 08:30:38.659489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.238 [2024-02-13 08:30:38.659872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.659887] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.660118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.660399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.660413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.660777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.661061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.661076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.661271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.661542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.661556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.661854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.662143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.662158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.662460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.662797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.662812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.663098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.663387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.663401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.663535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.663805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.663820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.664111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.664413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.664427] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.664787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.665154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.665169] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.665385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.665663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.665678] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.665900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.666201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.666215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.666505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.666876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.666891] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.667185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.667494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.667508] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.667796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.668139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.668156] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.668520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.668874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.668889] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.669178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.669471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.669486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.669781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.670274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.670288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.670506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.670793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.670808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.671097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.671415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.671430] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.671792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.672021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.672035] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.672424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.672763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.672777] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.673142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.673198] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:05.239 [2024-02-13 08:30:38.673554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.673570] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.673817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.674183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.674198] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.674490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.674717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.674733] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.674967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.675189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.675205] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.675432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.675737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.675752] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.676097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.676379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.676394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.676691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.677038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.677053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.677298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.677585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.677600] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.677813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.678152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.678167] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.678528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.678814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.239 [2024-02-13 08:30:38.678830] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.239 qpair failed and we were unable to recover it.
00:30:05.239 [2024-02-13 08:30:38.679118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.679395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.679410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.679698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.679990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.680005] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.680309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.680527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.680542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.680888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.681124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.681139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.681415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.681690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.681707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.681939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.682162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.682178] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.682466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.682698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.682714] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.682919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.683216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.683230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.683450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.683727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.683742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.684131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.684342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.684357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.684632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.684932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.684947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.685239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.685471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.685485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.685823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.686184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.686202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.686501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.686848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.686864] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.687099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.687339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.240 [2024-02-13 08:30:38.687353] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.240 qpair failed and we were unable to recover it.
00:30:05.240 [2024-02-13 08:30:38.687629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.687905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.687919] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.688145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.688387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.688401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.688768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.688993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.689008] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.689286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.689588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.689603] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 
00:30:05.240 [2024-02-13 08:30:38.689968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.690272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.690286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.690500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.690800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.690815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.691043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.691268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.691283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.691631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.691947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.691965] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 
00:30:05.240 [2024-02-13 08:30:38.692186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.692550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.692564] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.692791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.693083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.693097] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.693379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.693670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.693685] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.693959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.694253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.694268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 
00:30:05.240 [2024-02-13 08:30:38.694500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.694773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.694788] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.695064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.695404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.695418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.695631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.695995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.696010] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 00:30:05.240 [2024-02-13 08:30:38.696304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.696601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.696615] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.240 qpair failed and we were unable to recover it. 
00:30:05.240 [2024-02-13 08:30:38.696898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.240 [2024-02-13 08:30:38.697176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.697191] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.697490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.697781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.697802] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.698093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.698364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.698378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.698658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.699046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.699061] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 
00:30:05.241 [2024-02-13 08:30:38.699376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.699668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.699683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.699918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.700195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.700210] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.700557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.700844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.700859] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.701158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.701429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.701444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 
00:30:05.241 [2024-02-13 08:30:38.701808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.702101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.702115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.702357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.702589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.702603] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.702778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.702999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.703013] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.703326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.703607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.703624] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 
00:30:05.241 [2024-02-13 08:30:38.703915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.704202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.704216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.704418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.704706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.704721] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.705031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.705262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.705276] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.705637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.705934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.705949] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 
00:30:05.241 [2024-02-13 08:30:38.706318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.706539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.706554] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.706862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.707152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.707168] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.707525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.707762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.707778] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.708046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.708327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.708343] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 
00:30:05.241 [2024-02-13 08:30:38.708705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.709049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.709064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.709337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.709585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.709603] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.709981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.710272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.710288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.710660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.710889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.710905] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 
00:30:05.241 [2024-02-13 08:30:38.711142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.711468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.711483] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.241 qpair failed and we were unable to recover it. 00:30:05.241 [2024-02-13 08:30:38.711834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.241 [2024-02-13 08:30:38.712171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.712187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.712522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.712888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.712904] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.713184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.713463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.713478] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 
00:30:05.242 [2024-02-13 08:30:38.713695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.714060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.714075] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.714358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.714641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.714663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.715026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.715326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.715342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.715685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.715981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.715996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 
00:30:05.242 [2024-02-13 08:30:38.716285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.716624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.716639] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.717003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.717289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.717305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.717591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.717807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.717823] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.718130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.718424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.718439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 
00:30:05.242 [2024-02-13 08:30:38.718823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.719050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.719064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.719348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.719684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.719700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.720060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.720356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.720370] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.720668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.721005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.721020] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 
00:30:05.242 [2024-02-13 08:30:38.721357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.721655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.721670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.721845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.722123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.722137] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.722443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.722676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.722690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.722914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.723129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.723143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 
00:30:05.242 [2024-02-13 08:30:38.723434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.723774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.723789] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.724015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.724350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.724364] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.724574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.724797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.724811] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 00:30:05.242 [2024-02-13 08:30:38.725156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.725446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.242 [2024-02-13 08:30:38.725460] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420 00:30:05.242 qpair failed and we were unable to recover it. 
00:30:05.242 [2024-02-13 08:30:38.725679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.726013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.726027] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.726316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.726657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.726671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.727033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.727318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.727332] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.727628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.728016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.728031] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.728379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.728735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.728749] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.729053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.729351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.729366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.729702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.730004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.730019] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.730376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.730662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.242 [2024-02-13 08:30:38.730677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.242 qpair failed and we were unable to recover it.
00:30:05.242 [2024-02-13 08:30:38.730967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.731270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.731285] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.731577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.731875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.731890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.732255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.732563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.732578] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.732731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.733011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.733025] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.733255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.733488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.733502] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.733876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.734165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.734179] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.734489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.734785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.734822] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.735126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.735442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.735456] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.735753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.736047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.736061] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.736297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.736676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.736690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.737053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.737417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.737431] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.737653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.737990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.738004] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.738235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.738602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.738616] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.738836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.739205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.739219] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.739515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.739742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.739757] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.739992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.740271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.740286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.740572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.740853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.740868] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.741257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.741590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.741604] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.741877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.742238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.742252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.742459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.742823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.742838] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.743129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.743249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.743263] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.743625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.743852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.743867] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.744199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.744484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.744499] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.744836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.745120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.745135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.745473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.745825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.745841] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.746112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.746264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:05.243 [2024-02-13 08:30:38.746343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.746357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 [2024-02-13 08:30:38.746362] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.746371] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:05.243 [2024-02-13 08:30:38.746378] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:05.243 [2024-02-13 08:30:38.746485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:30:05.243 [2024-02-13 08:30:38.746657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.746592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:30:05.243 [2024-02-13 08:30:38.746697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:30:05.243 [2024-02-13 08:30:38.746698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:30:05.243 [2024-02-13 08:30:38.746958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.746973] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.747261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.747535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.747550] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.747850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.748213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.748227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.748572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.748919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.243 [2024-02-13 08:30:38.748934] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.243 qpair failed and we were unable to recover it.
00:30:05.243 [2024-02-13 08:30:38.749305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.749591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.749606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.749954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.750222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.750237] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.750533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.750822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.750837] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.751177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.751483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.751498] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.751871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.752211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.752225] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.752510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.752793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.752808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.753120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.753414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.753429] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.753724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.754026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.754041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.754407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.754678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.754694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.755055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.755365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.755380] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.755615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.755902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.755918] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.756214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.756549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.756565] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.756874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.757168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.757184] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.757458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.757797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.757814] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0a8000b90 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.758201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.758452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.758474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.758735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.759049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.759065] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.759406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.759713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.759729] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.760014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.760352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.760366] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.760645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.760988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.761002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.761389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.761600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.761615] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.761980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.762263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.762279] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.762640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.762931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.762946] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.763221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.763559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.763575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.763941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.764238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.764254] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.764627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.764857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.764872] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.765118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.765405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.765421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.765763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.766048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.766064] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.766447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.766812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.766827] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.767058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.767418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.767434] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.767772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.768055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.768070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.768355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.768663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.768677] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.244 qpair failed and we were unable to recover it.
00:30:05.244 [2024-02-13 08:30:38.768790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.244 [2024-02-13 08:30:38.769003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.769018] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.769379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.769604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.769620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.769943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.770309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.770325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.770612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.770823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.770843] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.771134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.771440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.771456] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.771760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.772034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.772051] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.772367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.772730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.772747] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.772987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.773341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.773357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.773669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.773939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.773956] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.774232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.774583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.774600] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.774910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.775225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.775241] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.775533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.775849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.775866] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.776146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.776528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.776546] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.776860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.777222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.777248] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.777474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.777752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.777769] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.778071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.778362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.245 [2024-02-13 08:30:38.778378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.245 qpair failed and we were unable to recover it.
00:30:05.245 [2024-02-13 08:30:38.778675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.778978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.778995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.779332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.779536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.779551] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.779911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.780148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.780164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.780528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.780892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.780908] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 
00:30:05.245 [2024-02-13 08:30:38.781151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.781460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.781475] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.781692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.782002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.782017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.782303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.782592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.782607] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.782826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.783100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.783115] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 
00:30:05.245 [2024-02-13 08:30:38.783468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.783805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.783820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.784109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.784279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.784294] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.784575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.784850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.784864] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 00:30:05.245 [2024-02-13 08:30:38.785224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.785514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.785528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.245 qpair failed and we were unable to recover it. 
00:30:05.245 [2024-02-13 08:30:38.785882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.245 [2024-02-13 08:30:38.786183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.786198] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.786559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.786841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.786857] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.787062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.787353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.787368] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.787598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.787883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.787899] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.788176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.788515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.788529] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.788833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.789168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.789182] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.789535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.789895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.789910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.790197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.790424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.790439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.790806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.791145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.791161] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.791377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.791734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.791750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.792036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.792307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.792322] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.792620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.793001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.793017] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.793249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.793600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.793614] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.793905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.794288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.794304] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.794589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.794956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.794972] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.795198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.795558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.795572] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.795812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.796055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.796070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.796352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.796691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.796707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.797020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.797314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.797329] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.797626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.797950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.797967] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.798281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.798515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.798531] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.798739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.799095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.799111] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.799413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.799777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.799793] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.800136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.800426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.800440] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.800802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.801028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.801043] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.801320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.801587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.801602] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.801941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.802304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.802318] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.802588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.802885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.802900] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.803213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.803483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.803497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.803797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.804163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.804177] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.804394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.804695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.804710] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 00:30:05.246 [2024-02-13 08:30:38.805071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.805435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.246 [2024-02-13 08:30:38.805450] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.246 qpair failed and we were unable to recover it. 
00:30:05.246 [2024-02-13 08:30:38.805765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.806017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.806032] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.806328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.806566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.806580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.806957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.807237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.807252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.807614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.807948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.807963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-02-13 08:30:38.808246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.808531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.808549] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.808887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.809247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.809262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.809628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.809918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.809934] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.810216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.810519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.810534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-02-13 08:30:38.810833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.811132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.811146] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.811532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.811834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.811849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.812129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.812468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.812483] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.812803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.813102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.813117] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-02-13 08:30:38.813458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.813675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.813690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.813962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.814085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.814099] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.814463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.814755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.814770] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.815015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.815375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.815389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-02-13 08:30:38.815665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.816001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.816016] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.816236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.816514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.816528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.816803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.817149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.817164] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 00:30:05.247 [2024-02-13 08:30:38.817449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.817672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.247 [2024-02-13 08:30:38.817687] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.247 qpair failed and we were unable to recover it. 
00:30:05.247 [2024-02-13 08:30:38.817977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.818365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.818379] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.818602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.818905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.818920] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.819213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.819485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.819500] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.819798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.820006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.820020] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.820297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.820592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.820606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.820841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.821180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.821194] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.821560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.821836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.821851] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.822198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.822537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.822551] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.822831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.823116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.823131] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.823338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.823691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.823706] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.824048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.824283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.824297] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.247 qpair failed and we were unable to recover it.
00:30:05.247 [2024-02-13 08:30:38.824701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.247 [2024-02-13 08:30:38.825051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.825066] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.825369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.825602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.825616] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.825910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.826269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.826283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.826572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.826913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.826928] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.827216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.827433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.827448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.827627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.827939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.827954] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.828299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.828614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.828629] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.828942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.829249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.829263] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.829406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.829619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.829633] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.829901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.830227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.830246] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.830352] nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7fe0 is same with the state(5) to be set
00:30:05.248 [2024-02-13 08:30:38.830721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.831016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.831029] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.831297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.831594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.831604] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.831882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.832159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.832169] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.832468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.832854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.832863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.833134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.833486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.833496] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.833847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.834109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.834119] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.834257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.834636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.834650] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.834965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.835231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.835241] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.835517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.835758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.835768] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.836050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.836189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.836199] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.836472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.836764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.836774] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.836991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.837254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.837263] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.837538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.837826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.837836] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.838105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.838326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.838336] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.838599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.838874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.838884] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.839145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.839423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.839432] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.839601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.839930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.839940] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.840213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.840563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.840572] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.840855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.841072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.841081] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.841353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.841510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.841519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.841790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.842118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.842127] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.842391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.842597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.248 [2024-02-13 08:30:38.842607] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.248 qpair failed and we were unable to recover it.
00:30:05.248 [2024-02-13 08:30:38.842960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.843240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.843250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.843511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.843731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.843741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.844018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.844389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.844398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.844657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.844931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.844940] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.845217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.845446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.845455] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.845735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.846005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.846014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.846361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.846639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.846652] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.846913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.847114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.847124] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.847454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.847731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.847741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.848081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.848435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.848445] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.848720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.848999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.849009] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.849299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.849573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.849583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.849864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.850093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.850102] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.850398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.850600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.850609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.850956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.851250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.851260] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.851537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.851890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.851900] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.852123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.852488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.852498] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.852723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.852997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.853006] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.853448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.853662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.853671] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.854031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.854358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.854368] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.854637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.854992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.855002] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.855242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.855522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.855531] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.855882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.856243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.856253] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.856547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.856805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.856815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.857110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.857320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.857329] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.857543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.857813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.857823] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.858150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.858420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.858429] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.858694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.859025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.249 [2024-02-13 08:30:38.859034] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.249 qpair failed and we were unable to recover it.
00:30:05.249 [2024-02-13 08:30:38.859316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.859690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.859700] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.860031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.860334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.860344] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.860518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.860718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.860728] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.861058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.861383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.861393] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-02-13 08:30:38.861696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.861871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.861881] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.862098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.862426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.862435] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.862728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.863002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.863012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.863218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.863433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.863442] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-02-13 08:30:38.863628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.863960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.863970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.864126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.864486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.864495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.864824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.865177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.865187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.865538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.865897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.865907] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-02-13 08:30:38.866234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.866465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.866474] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.866736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.867029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.867038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.867390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.867670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.867680] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.868034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.868383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.868392] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-02-13 08:30:38.868669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.868899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.868909] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.869218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.869497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.869506] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.869835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.870093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.870103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.870451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.870733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.870743] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-02-13 08:30:38.871002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.871315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.871324] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.871700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.872048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.872058] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.872327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.872677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.872687] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.873013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.873240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.873250] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-02-13 08:30:38.873511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.873725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.873735] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.873984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.874259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.874268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.874621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.874842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.874852] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.875130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.875386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.875395] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 
00:30:05.250 [2024-02-13 08:30:38.875610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.875868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.875877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.876090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.876439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.876448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.250 [2024-02-13 08:30:38.876683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.877020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.250 [2024-02-13 08:30:38.877030] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.250 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.877268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.877564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.877573] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.877924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.878138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.878147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.878359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.878650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.878659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.878935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.879195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.879204] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.879487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.879820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.879829] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.880096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.880450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.880459] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.880810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.881084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.881094] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.881332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.881695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.881705] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.881971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.882244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.882253] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.882529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.882810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.882820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.883119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.883452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.883462] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.883798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.884130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.884139] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.884466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.884688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.884698] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.884907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.885257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.885268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.885483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.885839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.885849] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.886076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.886337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.886346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.886635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.886960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.886971] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.887295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.887666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.887676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.888013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.888285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.888295] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.888597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.888942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.888952] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.889168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.889394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.889403] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.889776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.890037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.890047] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.890320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.890564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.890574] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.890907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.891175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.891187] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.891532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.891830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.891839] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.891998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.892286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.892296] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.892577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.892795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.892805] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.893131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.893408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.893417] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.893767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.894091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.894100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 
00:30:05.251 [2024-02-13 08:30:38.894381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.894705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.894715] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.895066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.895345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.895354] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.251 qpair failed and we were unable to recover it. 00:30:05.251 [2024-02-13 08:30:38.895683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.251 [2024-02-13 08:30:38.896040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-02-13 08:30:38.896050] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 00:30:05.252 [2024-02-13 08:30:38.896426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-02-13 08:30:38.896749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.252 [2024-02-13 08:30:38.896758] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.252 qpair failed and we were unable to recover it. 
00:30:05.252 [2024-02-13 08:30:38.897086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.897300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.897312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.897518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.897794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.897804] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.898098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.898428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.898437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.898609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.898870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.898881] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.899158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.899357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.899367] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.899587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.899846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.899855] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.900112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.900391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.900401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.900616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.900876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.900886] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.901156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.901430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.901439] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.901714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.902016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.902025] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.902294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.902582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.902593] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.902810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.903085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.903095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.903420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.903721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.903730] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.904018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.904238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.904247] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.904583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.904852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.904862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.905137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.905431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.905441] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.905717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.906067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.906076] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.906424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.906700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.906710] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.906985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.907261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.907270] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.907609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.907892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.907901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.908213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.908562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.908571] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.908903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.909231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.909241] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.909565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.909785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.909795] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.910061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.910409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.910418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.910687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.910967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.910977] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.911245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.911530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.911539] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.911811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.912031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.912041] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.252 [2024-02-13 08:30:38.912309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.912588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.252 [2024-02-13 08:30:38.912598] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.252 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.912856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.913115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.913126] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.913392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.913691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.913702] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.914053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.914326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.914337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.914670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.914959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.914970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.915328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.915620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.915630] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.915918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.916276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.916286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.916621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.916960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.916970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.917243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.917425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.917434] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.917776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.918145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.918155] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.918451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.918788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.918798] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.919084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.919358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.919368] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.919641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.919992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.920003] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.920357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.920733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.920743] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.921035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.921246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.921256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.921534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.921887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.921897] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.922008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.922282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.922292] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.922566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.922921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.922931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.923271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.923569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.923578] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.923863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.924161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.924171] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.541 qpair failed and we were unable to recover it.
00:30:05.541 [2024-02-13 08:30:38.924483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.541 [2024-02-13 08:30:38.924696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.924707] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.924984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.925316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.925326] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.925604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.925909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.925919] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.926196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.926500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.926510] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.926786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.926975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.926985] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.927265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.927597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.927608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.927832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.928124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.928133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.928468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.928684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.928694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.929055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.929267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.929277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.929552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.929879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.929889] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.930214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.930427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.930437] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.930721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.931047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.931056] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.931328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.931599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.931608] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.931870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.932100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.932109] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.932466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.932597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.932606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.932864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.933124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.933133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.933459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.933659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.933670] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.933900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.934178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.934188] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.934398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.934658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.934668] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.934996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.935265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.935274] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.935612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.935799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.935809] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.936152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.936498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.936507] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.936786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.937129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.937138] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.937422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.937774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.937784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.938049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.938328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.938337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.938626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.938967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.938977] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.939191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.939466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.939475] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.542 qpair failed and we were unable to recover it.
00:30:05.542 [2024-02-13 08:30:38.939702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.542 [2024-02-13 08:30:38.940052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.940061] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.940334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.940549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.940558] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.940848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.941147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.941157] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.941508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.941834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.941844] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.942198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.942423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.942433] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.942763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.943055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.943065] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.943335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.943542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.943552] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.943831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.944050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.944059] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.944330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.944588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.944597] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.944945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.945165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.945174] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.945405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.945684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.945693] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.946043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.946254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.946263] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.946543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.946888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.946897] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.947184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.947440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.543 [2024-02-13 08:30:38.947449] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.543 qpair failed and we were unable to recover it.
00:30:05.543 [2024-02-13 08:30:38.947775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.948103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.948113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.948328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.948626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.948636] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.948851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.949197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.949206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.949537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.949833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.949843] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 
00:30:05.543 [2024-02-13 08:30:38.950077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.950349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.950359] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.950583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.950794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.950803] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.951101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.951378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.951388] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.951726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.951933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.951943] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 
00:30:05.543 [2024-02-13 08:30:38.952292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.952659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.952669] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.953000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.953302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.953312] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.953587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.953848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.953858] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.954128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.954472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.954482] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 
00:30:05.543 [2024-02-13 08:30:38.954819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.955094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.955103] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.543 [2024-02-13 08:30:38.955238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.955588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.543 [2024-02-13 08:30:38.955598] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.543 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.955966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.956192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.956201] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.956473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.956798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.956808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-02-13 08:30:38.957158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.957439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.957448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.957777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.958045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.958055] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.958327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.958588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.958597] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.958872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.959225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.959234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-02-13 08:30:38.959462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.959816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.959826] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.960177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.960309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.960319] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.960668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.960952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.960962] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.961179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.961487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.961497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-02-13 08:30:38.961726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.962078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.962088] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.962293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.962637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.962656] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.962871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.963200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.963210] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.963439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.963780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.963790] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-02-13 08:30:38.964068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.964411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.964421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.964691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.964923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.964933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.965211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.965570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.965580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.965863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.966207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.966216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-02-13 08:30:38.966573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.966927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.966936] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.967143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.967406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.967416] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.967699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.967985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.967994] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.968344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.968683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.968693] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-02-13 08:30:38.968995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.969329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.969338] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.969665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.969937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.969947] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.970304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.970630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.970640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.544 [2024-02-13 08:30:38.970990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.971347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.971357] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 
00:30:05.544 [2024-02-13 08:30:38.971709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.971955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.544 [2024-02-13 08:30:38.971964] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.544 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.972236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.972508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.972518] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.972846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.973060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.973070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.973291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.973575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.973586] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-02-13 08:30:38.973937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.974189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.974198] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.974552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.974816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.974826] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.975106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.975327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.975336] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.975663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.975965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.975974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-02-13 08:30:38.976246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.976533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.976542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.976903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.977204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.977214] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.977488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.977836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.977846] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.978122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.978385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.978394] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-02-13 08:30:38.978680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.978943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.978952] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.979169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.979444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.979455] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.979783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.980108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.980118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 00:30:05.545 [2024-02-13 08:30:38.980344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.980615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.545 [2024-02-13 08:30:38.980624] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.545 qpair failed and we were unable to recover it. 
00:30:05.545 [2024-02-13 08:30:38.980907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.545 [2024-02-13 08:30:38.981206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.545 [2024-02-13 08:30:38.981215] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.545 qpair failed and we were unable to recover it.
[... the same three-message failure cycle (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~86 more times between 08:30:38.981553 and 08:30:39.031424 ...]
00:30:05.548 [2024-02-13 08:30:39.031688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-02-13 08:30:39.031854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.548 [2024-02-13 08:30:39.031864] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420
00:30:05.548 qpair failed and we were unable to recover it.
00:30:05.549 [2024-02-13 08:30:39.032080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.032406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.032415] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.032757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.033027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.033036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.033423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.033779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.033789] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.034062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.034410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.034419] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 
00:30:05.549 [2024-02-13 08:30:39.034698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.034995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.035004] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.035268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.035597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.035606] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.035911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.036250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.036259] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.036474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.036701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.036711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 
00:30:05.549 [2024-02-13 08:30:39.036991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.037264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.037273] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.037558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.037827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.037837] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b0000b90 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.038200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.038498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.038517] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.038893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.039182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.039197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 
00:30:05.549 [2024-02-13 08:30:39.039489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.039719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.039735] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.040092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.040426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.040440] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.040725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.041039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.041054] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.041394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.041701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.041717] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 
00:30:05.549 [2024-02-13 08:30:39.042055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.042344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.042358] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.042533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.042807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.042822] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.043185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.043467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.043483] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.043697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.044083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.044098] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 
00:30:05.549 [2024-02-13 08:30:39.044401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.044680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.044697] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.045078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.045346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.045360] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.045654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.045944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.045959] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.046180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.046421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.046435] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 
00:30:05.549 [2024-02-13 08:30:39.046670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.046952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.046966] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.047249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.047517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.047532] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.549 qpair failed and we were unable to recover it. 00:30:05.549 [2024-02-13 08:30:39.047808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.048156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.549 [2024-02-13 08:30:39.048171] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.048464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.048827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.048842] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 
00:30:05.550 [2024-02-13 08:30:39.049139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.049491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.049505] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.049716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.050015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.050029] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.050320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.050658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.050676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.051015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.051247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.051262] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 
00:30:05.550 [2024-02-13 08:30:39.051598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.051900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.051915] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.052276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.052560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.052575] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.052859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.053145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.053159] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.053448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.053812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.053827] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 
00:30:05.550 [2024-02-13 08:30:39.054193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.054543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.054557] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.054858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.055140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.055155] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.055517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.055804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.055819] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.056163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.056516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.056531] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 
00:30:05.550 [2024-02-13 08:30:39.056806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.057042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.057057] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.057447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.057679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.057694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.058061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.058271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.058286] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.058642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.058916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.058930] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 
00:30:05.550 [2024-02-13 08:30:39.059221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.059500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.059515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.059839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.060125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.060140] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.060440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.060798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.060814] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.061115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.061471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.061485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 
00:30:05.550 [2024-02-13 08:30:39.061822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.062126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.062140] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.062357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.062720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.062734] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.550 [2024-02-13 08:30:39.062964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.063260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.550 [2024-02-13 08:30:39.063274] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.550 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.063629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.063924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.063939] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 
00:30:05.551 [2024-02-13 08:30:39.064102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.064437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.064452] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.064764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.065053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.065067] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.065434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.065715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.065729] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.066021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.066294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.066308] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 
00:30:05.551 [2024-02-13 08:30:39.066665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.067045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.067060] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.067392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.067661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.067676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.067901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.068237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.068251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.068560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.068838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.068853] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 
00:30:05.551 [2024-02-13 08:30:39.069216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.069430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.069445] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.069731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.069969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.069984] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.070190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.070483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.070497] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.070940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.071189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.071204] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 
00:30:05.551 [2024-02-13 08:30:39.071440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.071679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.071694] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.071928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.072238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.072252] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.072553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.072871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.072887] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.073174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.073407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.073421] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 
00:30:05.551 [2024-02-13 08:30:39.073790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.073965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.073980] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.074209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.074443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.074458] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.074741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.075078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.075093] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.075362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.075650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.075665] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 
00:30:05.551 [2024-02-13 08:30:39.076025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.076351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.076365] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.076584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.076865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.076879] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.077114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.077244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.077258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.077567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.077907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.077922] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 
00:30:05.551 [2024-02-13 08:30:39.078201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.078560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.078574] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.551 qpair failed and we were unable to recover it. 00:30:05.551 [2024-02-13 08:30:39.078847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.551 [2024-02-13 08:30:39.079013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.079027] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.079367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.079707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.079723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.080015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.080326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.080340] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 
00:30:05.552 [2024-02-13 08:30:39.080518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.080903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.080919] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.081201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.081481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.081498] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.081778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.082118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.082133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.082480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.082771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.082786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 
00:30:05.552 [2024-02-13 08:30:39.083087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.083423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.083438] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.083757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.084114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.084128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.084341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.084681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.084696] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.085053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.085334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.085349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 
00:30:05.552 [2024-02-13 08:30:39.085579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.085960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.085974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.086239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.086601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.086615] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.086979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.087268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.087283] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.087500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.087724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.087742] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 
00:30:05.552 [2024-02-13 08:30:39.088136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.088404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.088419] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.088789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.089074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.089089] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.089341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.089632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.089650] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.090011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.090295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.090310] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 
00:30:05.552 [2024-02-13 08:30:39.090526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.090839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.090855] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.091149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.091429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.091444] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.091797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.092089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.092104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.092394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.092622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.092636] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 
00:30:05.552 [2024-02-13 08:30:39.092945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.093257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.093272] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.093566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.093806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.093821] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.094049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.094433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.094447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.094783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.095000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.095015] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 
00:30:05.552 [2024-02-13 08:30:39.095238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.095467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.552 [2024-02-13 08:30:39.095481] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.552 qpair failed and we were unable to recover it. 00:30:05.552 [2024-02-13 08:30:39.095716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.095918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.095933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.096206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.096563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.096577] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.096863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.097170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.097185] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-02-13 08:30:39.097466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.097776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.097791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.098005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.098276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.098290] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.098509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.098736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.098751] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.099092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.099383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.099398] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-02-13 08:30:39.099688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.099982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.099997] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.100284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.100666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.100682] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.100989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.101282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.101297] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.101546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.101755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.101770] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-02-13 08:30:39.102046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.102331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.102346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.102685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.102909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.102924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.103198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.103481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.103495] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.103892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.104181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.104196] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-02-13 08:30:39.104435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.104664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.104679] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.104972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.105194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.105209] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.105499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.105944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.105959] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.106233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.106501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.106515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-02-13 08:30:39.106879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.107217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.107232] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.107448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.107718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.107733] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.108017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.108219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.108234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.108438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.108726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.108741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-02-13 08:30:39.108962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.109233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.109248] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.109533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.109769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.109784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.110001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.110269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.110284] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.553 [2024-02-13 08:30:39.110561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.110917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.110932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 
00:30:05.553 [2024-02-13 08:30:39.111069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.111346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.553 [2024-02-13 08:30:39.111362] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.553 qpair failed and we were unable to recover it. 00:30:05.554 [2024-02-13 08:30:39.111644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-02-13 08:30:39.111936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-02-13 08:30:39.111951] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-02-13 08:30:39.112164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-02-13 08:30:39.112375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-02-13 08:30:39.112390] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 00:30:05.554 [2024-02-13 08:30:39.112730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-02-13 08:30:39.112909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.554 [2024-02-13 08:30:39.112924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.554 qpair failed and we were unable to recover it. 
00:30:05.554 [2024-02-13 08:30:39.113270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.113493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.113508] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.113738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.114021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.114036] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.114394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.114686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.114702] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.114915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.115130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.115145] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.115450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.115674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.115690] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.115996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.116281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.116296] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.116578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.116790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.116808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.117101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.117332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.117347] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.117686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.117978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.117993] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.118285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.118620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.118634] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.118899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.119278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.119297] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.119611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.120022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.120038] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.120335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.120609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.120624] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.120916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.121129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.121143] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.121375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.121591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.121605] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.121885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.122221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.122236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.122450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.122731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.122747] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.122980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.123193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.123209] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.123429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.123703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.123718] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.124081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.124310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.124325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.124628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.124845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.124860] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.125085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.125360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.125374] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.125599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.125915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.125931] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.126298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.126606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.126620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.554 qpair failed and we were unable to recover it.
00:30:05.554 [2024-02-13 08:30:39.126858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.554 [2024-02-13 08:30:39.127134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.127149] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.127360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.127694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.127711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.127994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.128236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.128251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.128549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.128677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.128692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.129033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.129267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.129281] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.129560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.129840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.129855] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.130169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.130294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.130309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.130530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.130804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.130820] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.131029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.131310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.131325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.131601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.131826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.131841] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.132183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.132396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.132411] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.132777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.133127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.133141] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.133359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.133562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.133578] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.133818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.134103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.134118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.134335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.134608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.134623] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.134845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.135070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.135085] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.135315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.135635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.135659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.135881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.136115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.136129] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.136411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.136797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.136812] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.137064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.137280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.137295] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.137513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.137786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.137801] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.138024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.138252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.138267] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.138455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.138771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.138786] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.555 qpair failed and we were unable to recover it.
00:30:05.555 [2024-02-13 08:30:39.139091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.555 [2024-02-13 08:30:39.139308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.139323] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.139540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.139778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.139793] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.140083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.140353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.140367] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.140569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.140848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.140863] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.141232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.141522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.141537] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.141761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.142030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.142045] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.142391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.142623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.142638] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.142980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.143203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.143218] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.143500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.143799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.143814] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.144114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.144447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.144462] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.144672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.144886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.144901] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.145182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.145515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.145530] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.145888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.146158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.146173] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.146489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.146794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.146809] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.147095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.147386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.147401] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.147694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.147948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.147963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.148191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.148479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.148494] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.148716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.148999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.149013] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.149362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.149633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.149653] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.150000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.150297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.150311] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.150544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.150766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.150793] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.151074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.151355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.151369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.151643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.151889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.151904] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.152220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.152519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.152534] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.152816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.153107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.556 [2024-02-13 08:30:39.153122] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.556 qpair failed and we were unable to recover it.
00:30:05.556 [2024-02-13 08:30:39.153348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-02-13 08:30:39.153641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-02-13 08:30:39.153662] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-02-13 08:30:39.153848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-02-13 08:30:39.154131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-02-13 08:30:39.154146] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.556 [2024-02-13 08:30:39.154444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-02-13 08:30:39.154676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.556 [2024-02-13 08:30:39.154691] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.556 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.154938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.155236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.155251] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 
00:30:05.557 [2024-02-13 08:30:39.155474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.155697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.155712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.155990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.156260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.156275] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.156565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.156783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.156799] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.157015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.157308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.157323] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 
00:30:05.557 [2024-02-13 08:30:39.157544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.157767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.157782] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.158013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.158314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.158329] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.158567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.158793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.158808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.159027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.159304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.159319] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 
00:30:05.557 [2024-02-13 08:30:39.159537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.159825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.159841] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.160044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.160323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.160337] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.160579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.160790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.160806] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.161083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.161355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.161369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 
00:30:05.557 [2024-02-13 08:30:39.161589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.161802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.161817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.162090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.162293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.162309] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.162527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.162818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.162833] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.163064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.163275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.163290] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 
00:30:05.557 [2024-02-13 08:30:39.163582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.163862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.163877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.164172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.164409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.164423] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.164765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.164997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.165012] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.165321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.165432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.165447] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 
00:30:05.557 [2024-02-13 08:30:39.165725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.166001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.166016] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.166298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.166521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.166536] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.166831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.167143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.167158] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.167502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.167740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.167755] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 
00:30:05.557 [2024-02-13 08:30:39.167970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.168210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.168224] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.168498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.168723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.168739] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.557 qpair failed and we were unable to recover it. 00:30:05.557 [2024-02-13 08:30:39.169037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.557 [2024-02-13 08:30:39.169263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.169278] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.169555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.169836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.169853] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 
00:30:05.558 [2024-02-13 08:30:39.170136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.170363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.170378] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.170599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.170937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.170953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.171176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.171452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.171468] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.171696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.171984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.171999] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 
00:30:05.558 [2024-02-13 08:30:39.172207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.172414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.172428] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.172703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.172983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.172998] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.173220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.173506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.173521] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.173859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.174143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.174160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 
00:30:05.558 [2024-02-13 08:30:39.174447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.174669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.174684] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.174913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.175127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.175142] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.175355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.175587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.175602] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.175782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.176054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.176069] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 
00:30:05.558 [2024-02-13 08:30:39.176383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.176603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.176618] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.176872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.177089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.177104] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.177341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.177568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.177583] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.177800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.178022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.178037] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 
00:30:05.558 [2024-02-13 08:30:39.178257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.178540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.178556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.178779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.179028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.179043] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.179262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.179487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.179502] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.179712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.180121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.180135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 
00:30:05.558 [2024-02-13 08:30:39.180346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.180563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.180577] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.180754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.180982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.180998] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.181200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.181402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.181418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.181708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.181979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.181994] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 
00:30:05.558 [2024-02-13 08:30:39.182276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.182486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.182502] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.182730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.183033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.558 [2024-02-13 08:30:39.183048] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.558 qpair failed and we were unable to recover it. 00:30:05.558 [2024-02-13 08:30:39.183265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.183488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.183503] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-02-13 08:30:39.183719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.183953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.183967] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 
00:30:05.559 [2024-02-13 08:30:39.184260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.184471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.184486] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-02-13 08:30:39.184664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.184887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.184902] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-02-13 08:30:39.185121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.185341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.185356] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 00:30:05.559 [2024-02-13 08:30:39.185582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.185809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.559 [2024-02-13 08:30:39.185824] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.559 qpair failed and we were unable to recover it. 
00:30:05.559 [2024-02-13 08:30:39.186100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.559 [2024-02-13 08:30:39.186385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.559 [2024-02-13 08:30:39.186400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.559 qpair failed and we were unable to recover it.
[... the four-line sequence above repeats with new timestamps from 08:30:39.186688 through 08:30:39.231363 (wall clock 00:30:05.559-00:30:05.832); every attempt fails identically with errno = 111 on tqpair=0x7fb0b8000b90, addr=10.0.0.2, port=4420 ...]
00:30:05.832 [2024-02-13 08:30:39.231638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.832 [2024-02-13 08:30:39.231874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.832 [2024-02-13 08:30:39.231889] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.832 qpair failed and we were unable to recover it. 00:30:05.832 [2024-02-13 08:30:39.232230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.832 [2024-02-13 08:30:39.232459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.832 [2024-02-13 08:30:39.232473] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.832 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.232692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.232964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.232979] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.233205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.233425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.233440] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 
00:30:05.833 [2024-02-13 08:30:39.233660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.233831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.233847] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.234228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.234512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.234527] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.234717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.234985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.235000] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.235292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.235579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.235593] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 
00:30:05.833 [2024-02-13 08:30:39.235812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.236053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.236068] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.236353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.236571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.236586] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.236874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.237097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.237113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.237383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.237607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.237622] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 
00:30:05.833 [2024-02-13 08:30:39.237843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.238179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.238194] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.238435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.238722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.238737] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.239057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.239279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.239296] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.239465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.239697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.239713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 
00:30:05.833 [2024-02-13 08:30:39.240054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.240263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.240278] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.240552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.240768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.240785] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.241011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.241199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.241214] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.241553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.241775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.241791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 
00:30:05.833 [2024-02-13 08:30:39.242020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.242241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.242256] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.242483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.242784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.242800] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.243092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.243375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.243390] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.243609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.243835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.243850] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 
00:30:05.833 [2024-02-13 08:30:39.244130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.244528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.244548] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.244764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.245057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.245072] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.245277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.245577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.245591] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.833 qpair failed and we were unable to recover it. 00:30:05.833 [2024-02-13 08:30:39.245809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.246083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.833 [2024-02-13 08:30:39.246098] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 
00:30:05.834 [2024-02-13 08:30:39.246369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.246605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.246620] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.246840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.247132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.247147] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.247400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.247697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.247713] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.248055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.248220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.248234] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 
00:30:05.834 [2024-02-13 08:30:39.248446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.248657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.248673] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.249010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.249285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.249300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.249572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.249860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.249878] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.250114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.250346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.250361] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 
00:30:05.834 [2024-02-13 08:30:39.250584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.250919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.250934] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.251212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.251428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.251448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.251691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.251923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.251938] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.252330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.252613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.252627] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 
00:30:05.834 [2024-02-13 08:30:39.252905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.253120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.253135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.253472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.253688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.253704] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.253986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.254254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.254269] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.254488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.254699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.254725] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 
00:30:05.834 [2024-02-13 08:30:39.254955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.255241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.255258] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.255537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.255760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.255775] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.255979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.256192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.256208] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.256443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.256722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.256737] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 
00:30:05.834 [2024-02-13 08:30:39.256955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.257226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.257241] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.257360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.257594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.257609] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.834 qpair failed and we were unable to recover it. 00:30:05.834 [2024-02-13 08:30:39.257936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.834 [2024-02-13 08:30:39.258158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.258173] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 00:30:05.835 [2024-02-13 08:30:39.258400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.258625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.258640] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 
00:30:05.835 [2024-02-13 08:30:39.258934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.259216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.259230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 00:30:05.835 [2024-02-13 08:30:39.259513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.259745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.259760] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 00:30:05.835 [2024-02-13 08:30:39.260096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.260310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.260325] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 00:30:05.835 [2024-02-13 08:30:39.260609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.260787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.260802] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 
00:30:05.835 [2024-02-13 08:30:39.261079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.261305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.261319] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 00:30:05.835 [2024-02-13 08:30:39.261542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.261818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.261834] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 00:30:05.835 [2024-02-13 08:30:39.262065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.262341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.262355] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 00:30:05.835 [2024-02-13 08:30:39.262571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.262912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.835 [2024-02-13 08:30:39.262927] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.835 qpair failed and we were unable to recover it. 
00:30:05.839 [2024-02-13 08:30:39.306861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.307098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.307113] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.307427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.307629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.307644] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.307920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.308106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.308123] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.308342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.308644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.308673] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 
00:30:05.839 [2024-02-13 08:30:39.308897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.309121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.309135] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.309307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.309530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.309544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.309829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.310111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.310125] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.310344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.310634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.310652] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 
00:30:05.839 [2024-02-13 08:30:39.310941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.311170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.311185] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.311465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.311754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.311769] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.312053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.312368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.312383] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.312615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.312957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.312972] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 
00:30:05.839 [2024-02-13 08:30:39.313265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.313634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.313654] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.313871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.314080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.314095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.314326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.314667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.314682] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.314898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.315168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.315183] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 
00:30:05.839 [2024-02-13 08:30:39.315472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.315683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.315699] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.316004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.316276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.316290] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.316570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.316806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.316822] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.316959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.317188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.317202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 
00:30:05.839 [2024-02-13 08:30:39.317419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.317702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.317717] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.317988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.318213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.318227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.318507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.318789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.318804] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.839 [2024-02-13 08:30:39.319103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.319333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.319348] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 
00:30:05.839 [2024-02-13 08:30:39.319583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.319857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.839 [2024-02-13 08:30:39.319872] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.839 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.320087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.320290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.320305] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.320525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.320708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.320723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.321016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.321308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.321323] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 
00:30:05.840 [2024-02-13 08:30:39.321533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.321755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.321770] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.322061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.322333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.322347] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.322620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.322847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.322862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.323085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.323374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.323389] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 
00:30:05.840 [2024-02-13 08:30:39.323600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.323876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.323892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.324130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.324354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.324369] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.324571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.324784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.324800] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.325094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.325370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.325385] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 
00:30:05.840 [2024-02-13 08:30:39.325606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.325819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.325835] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.326047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.326206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.326221] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.326497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.326799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.326815] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.326990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.327256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.327271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 
00:30:05.840 [2024-02-13 08:30:39.327488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.327775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.327791] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.328072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.328285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.328300] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.328507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.328736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.328750] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.328989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.329225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.329240] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 
00:30:05.840 [2024-02-13 08:30:39.329544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.329821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.329836] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.330052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.330332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.330346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.330633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.330936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.330952] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.331177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.331352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.331367] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 
00:30:05.840 [2024-02-13 08:30:39.331662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.331955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.331970] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.332199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.332473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.332488] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.840 qpair failed and we were unable to recover it. 00:30:05.840 [2024-02-13 08:30:39.332721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.332943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.840 [2024-02-13 08:30:39.332959] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.841 qpair failed and we were unable to recover it. 00:30:05.841 [2024-02-13 08:30:39.333238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.335910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.335927] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.841 qpair failed and we were unable to recover it. 
00:30:05.841 [2024-02-13 08:30:39.336270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.336634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.336662] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.841 qpair failed and we were unable to recover it. 00:30:05.841 [2024-02-13 08:30:39.336964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.337253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.337271] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.841 qpair failed and we were unable to recover it. 00:30:05.841 [2024-02-13 08:30:39.337494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.337805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.337821] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.841 qpair failed and we were unable to recover it. 00:30:05.841 [2024-02-13 08:30:39.338053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.338293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.841 [2024-02-13 08:30:39.338307] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.841 qpair failed and we were unable to recover it. 
00:30:05.841 [2024-02-13 08:30:39.338528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.338813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.338828] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.339166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.339505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.339520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.339809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.340082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.340097] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.340336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.340671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.340687] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.341004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.341221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.341236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.341641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.341884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.341899] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.342235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.342447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.342463] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.342734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.343024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.343039] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.343319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.343523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.343538] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.343780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.343989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.344004] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.344279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.344485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.344501] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.344795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.345262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.345277] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.345487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.345904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.345920] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.346221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.346440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.346456] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.346572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.346908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.346924] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.347152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.347434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.347449] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.347682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.347959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.347974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.348111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.348315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.348330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.348558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.348848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.348864] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.349209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.349427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.349441] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.841 qpair failed and we were unable to recover it.
00:30:05.841 [2024-02-13 08:30:39.349721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.841 [2024-02-13 08:30:39.349944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.349959] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.350266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.350551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.350566] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.350853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.351191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.351206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.351501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.351725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.351743] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.351995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.352200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.352216] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.352582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.352857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.352873] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.353134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.353453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.353468] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.353759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.353973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.353988] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.354167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.354432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.354448] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.354813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.355118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.355133] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.355339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.355549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.355564] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.355783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.355981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.355996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.356254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.356418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.356433] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.356657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.356861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.356877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.357097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.357380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.357395] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.357615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.357894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.357910] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.358132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.358352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.358367] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.358576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.358745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.358771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.359007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.359253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.359268] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.359439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.359739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.359754] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.359970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.360173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.360188] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.842 [2024-02-13 08:30:39.360470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.360598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.842 [2024-02-13 08:30:39.360614] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.842 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.360811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.361059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.361074] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.361286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.361503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.361518] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.361793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.362015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.362033] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.362305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.362667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.362683] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.362939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.363159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.363175] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.363461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.363747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.363763] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.364063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.364303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.364321] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.364597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.364925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.364941] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.365175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.365390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.365404] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.365628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.366053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.366068] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.366431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.366725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.366741] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.366986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.367208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.367224] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.367458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.367752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.367767] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.367982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.368195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.368211] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.368334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.368635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.368656] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.369013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.369334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.369349] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.369587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.369858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.369877] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.370087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.370369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.370384] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.370619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.370917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.370933] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.371152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.371382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.371396] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.371628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.371989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.372006] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.372233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.372443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.372458] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.372799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.372981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.372996] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.373360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.373598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.373613] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.373759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.373980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.373995] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.843 [2024-02-13 08:30:39.374225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.374459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.843 [2024-02-13 08:30:39.374475] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.843 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.374706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.374929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.374945] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.375287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.375558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.375573] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.375875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.376039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.376054] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.376287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.376499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.376514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.376721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.376932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.376948] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.377287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.377597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.377612] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.377840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.378111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.378126] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.378348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.378585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.378600] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.378814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.379053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.379068] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.379298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.379504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.379518] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.379730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.380080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.380095] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.380463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.380807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.380823] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.381038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.381315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.381331] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.381580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.381801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.381817] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.382120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.382359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.382374] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.382653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.382875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.382890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.383109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.383403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.383418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.383665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.383948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.383963] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.384182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.384531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.384547] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.384784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.384893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.844 [2024-02-13 08:30:39.384907] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.844 qpair failed and we were unable to recover it.
00:30:05.844 [2024-02-13 08:30:39.385187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.385406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.385422] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.844 qpair failed and we were unable to recover it. 00:30:05.844 [2024-02-13 08:30:39.385659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.385939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.385954] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.844 qpair failed and we were unable to recover it. 00:30:05.844 [2024-02-13 08:30:39.386296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.386529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.386544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.844 qpair failed and we were unable to recover it. 00:30:05.844 [2024-02-13 08:30:39.386750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.386958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.386974] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.844 qpair failed and we were unable to recover it. 
00:30:05.844 [2024-02-13 08:30:39.387200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.387512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.387528] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.844 qpair failed and we were unable to recover it. 00:30:05.844 [2024-02-13 08:30:39.387758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.388056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.844 [2024-02-13 08:30:39.388072] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.844 qpair failed and we were unable to recover it. 00:30:05.844 [2024-02-13 08:30:39.388303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.388529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.388544] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.388919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.389198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.389213] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 
00:30:05.845 [2024-02-13 08:30:39.389501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.389796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.389811] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.390046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.390276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.390291] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.390499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.390780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.390795] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.391083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.391371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.391385] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 
00:30:05.845 [2024-02-13 08:30:39.391508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.391793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.391808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.392084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.392334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.392348] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.392701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.392939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.392953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.393182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.393385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.393399] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 
00:30:05.845 [2024-02-13 08:30:39.393759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.394054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.394069] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.394370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.394677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.394692] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.394920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.395187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.395202] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.395514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.395705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.395720] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 
00:30:05.845 [2024-02-13 08:30:39.396002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.396216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.396230] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.396507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.396844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.396862] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.397121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.397330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.397345] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.397558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.397793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.397808] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 
00:30:05.845 [2024-02-13 08:30:39.398082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.398351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.398365] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.398663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.398877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.398892] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.399168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.399441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.399456] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.399727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.400000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.400014] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 
00:30:05.845 [2024-02-13 08:30:39.400245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.400569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.400584] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.400868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.401103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.401118] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.401399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.401686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.845 [2024-02-13 08:30:39.401701] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.845 qpair failed and we were unable to recover it. 00:30:05.845 [2024-02-13 08:30:39.401933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.402211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.402226] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 
00:30:05.846 [2024-02-13 08:30:39.402514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.402638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.402717] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.403064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.403399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.403413] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.403634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.403917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.403932] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.404150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.404386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.404400] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 
00:30:05.846 [2024-02-13 08:30:39.404614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.404846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.404861] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.405154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.405279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.405294] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.405567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.405807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.405822] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.406038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.406256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.406270] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 
00:30:05.846 [2024-02-13 08:30:39.406536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.406798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.406813] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.407083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.407368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.407384] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.407729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.408088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.408102] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.408310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.408527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.408542] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 
00:30:05.846 [2024-02-13 08:30:39.408827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.409055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.409070] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.409291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.409644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.409663] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.409941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.410073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.410087] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.410455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.410740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.410755] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 
00:30:05.846 [2024-02-13 08:30:39.410984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.411265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.411279] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.411503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.411804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.411819] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.412044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.412273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.412288] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.412583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.412808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.412823] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 
00:30:05.846 [2024-02-13 08:30:39.413097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.413315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.413330] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.413550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.413832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.413847] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.414218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.414439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.414454] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 00:30:05.846 [2024-02-13 08:30:39.414732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.415153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.846 [2024-02-13 08:30:39.415168] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.846 qpair failed and we were unable to recover it. 
00:30:05.846 [2024-02-13 08:30:39.415464] … 00:30:05.850 [2024-02-13 08:30:39.459620] The same three-line connection-retry failure repeats continuously over this interval (dozens of occurrences, two connect() attempts per qpair):
00:30:05.846   posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.846   nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.846   qpair failed and we were unable to recover it.
00:30:05.847 Interleaved shell trace (in order of appearance):
00:30:05.847 08:30:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:30:05.847 08:30:39 -- common/autotest_common.sh@850 -- # return 0
00:30:05.847 08:30:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:05.847 08:30:39 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:05.847 08:30:39 -- common/autotest_common.sh@10 -- # set +x
00:30:05.848 [2024-02-13 08:30:39.434751] From this point onward the failing tqpair handle changes from 0x13da510 to 0x7fb0b8000b90 (same addr=10.0.0.2, port=4420); the identical failure pattern continues unchanged through the end of this excerpt.
00:30:05.850 [2024-02-13 08:30:39.459745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.850 [2024-02-13 08:30:39.459962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.850 [2024-02-13 08:30:39.459978] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.850 qpair failed and we were unable to recover it.
00:30:05.850 08:30:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:05.850 [2024-02-13 08:30:39.460198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.850 08:30:39 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:05.850 [2024-02-13 08:30:39.460562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.850 [2024-02-13 08:30:39.460579] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.850 qpair failed and we were unable to recover it.
00:30:05.850 08:30:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:05.850 [2024-02-13 08:30:39.460814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.850 08:30:39 -- common/autotest_common.sh@10 -- # set +x
00:30:05.850 [2024-02-13 08:30:39.461098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.850 [2024-02-13 08:30:39.461114] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.850 qpair failed and we were unable to recover it.
00:30:05.850 [2024-02-13 08:30:39.461353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.461565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.461580] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.850 qpair failed and we were unable to recover it. 00:30:05.850 [2024-02-13 08:30:39.461881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.462101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.462116] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.850 qpair failed and we were unable to recover it. 00:30:05.850 [2024-02-13 08:30:39.462238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.462461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.462475] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.850 qpair failed and we were unable to recover it. 00:30:05.850 [2024-02-13 08:30:39.462689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.462986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.463001] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.850 qpair failed and we were unable to recover it. 
00:30:05.850 [2024-02-13 08:30:39.463230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.463457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.463472] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.850 qpair failed and we were unable to recover it. 00:30:05.850 [2024-02-13 08:30:39.463694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.463923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.463938] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.850 qpair failed and we were unable to recover it. 00:30:05.850 [2024-02-13 08:30:39.464163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.850 [2024-02-13 08:30:39.464440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.464455] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.464749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.465006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.465020] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-02-13 08:30:39.465294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.465504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.465519] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.465753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.465972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.465987] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.466347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.466678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.466693] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.466924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.467107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.467122] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-02-13 08:30:39.467476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.467695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.467711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.467929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.468161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.468176] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.468469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.468748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.468764] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.468964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.469182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.469197] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-02-13 08:30:39.469414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.469709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.469725] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.469846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.470145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.470160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.470388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.470695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.470711] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.471029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.471244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.471260] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-02-13 08:30:39.471481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.471661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.471676] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.471956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.472230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.472246] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.472539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.472757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.472774] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.473048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.473326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.473342] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-02-13 08:30:39.473626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.473858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.473874] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.474215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.474433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.474449] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.474676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.474857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.474873] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.475098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.475393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.475410] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-02-13 08:30:39.475621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.475832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.475850] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.475971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.476177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.476200] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.476475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.476767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.476784] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 00:30:05.851 [2024-02-13 08:30:39.476901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.477190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.851 [2024-02-13 08:30:39.477206] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420 00:30:05.851 qpair failed and we were unable to recover it. 
00:30:05.851 [2024-02-13 08:30:39.477424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.851 [2024-02-13 08:30:39.477707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.851 [2024-02-13 08:30:39.477723] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.851 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.477996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.478221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.478236] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.478446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.478733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.478749] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.478859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.479062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.479077] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb0b8000b90 with addr=10.0.0.2, port=4420
00:30:05.852 Malloc0
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.479339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.479576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.479596] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 08:30:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:30:05.852 [2024-02-13 08:30:39.479878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.480112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.480128] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.852 08:30:39 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.480355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 08:30:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:30:05.852 08:30:39 -- common/autotest_common.sh@10 -- # set +x
00:30:05.852 [2024-02-13 08:30:39.480718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.480735] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.480851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.481079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.481094] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-02-13 08:30:39.481319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.481499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.481514] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-02-13 08:30:39.481838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.482136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.482151] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-02-13 08:30:39.482364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.482638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.482659] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 
00:30:05.852 [2024-02-13 08:30:39.482893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.483174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.483189] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-02-13 08:30:39.483402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.483682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.483697] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-02-13 08:30:39.483901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.484081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.484100] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 00:30:05.852 [2024-02-13 08:30:39.484460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.484687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.484703] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.852 qpair failed and we were unable to recover it. 
00:30:05.852 [2024-02-13 08:30:39.484926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.485145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.485160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.485464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.485750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.485765] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.486078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.486416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.486431] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.486596] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:05.852 [2024-02-13 08:30:39.486719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.486942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:05.852 [2024-02-13 08:30:39.486957] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420
00:30:05.852 qpair failed and we were unable to recover it.
00:30:05.852 [2024-02-13 08:30:39.487329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.852 [2024-02-13 08:30:39.487621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.487635] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.487927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.488265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.488280] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.488505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.488741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.488757] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.489032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.489154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.489168] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 
00:30:05.853 [2024-02-13 08:30:39.489393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.489673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.489688] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.490034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.490340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.490354] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.490574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.490938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.490953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.491247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.491470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.491485] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 
00:30:05.853 [2024-02-13 08:30:39.491734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.491966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.491981] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 08:30:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.853 [2024-02-13 08:30:39.492272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 08:30:39 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.853 [2024-02-13 08:30:39.492500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.492515] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 08:30:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.853 [2024-02-13 08:30:39.492798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 08:30:39 -- common/autotest_common.sh@10 -- # set +x 00:30:05.853 [2024-02-13 08:30:39.493094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.493109] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 
00:30:05.853 [2024-02-13 08:30:39.493337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.493563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.493578] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.493863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.494146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.494160] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.494521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.494753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.494769] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.494966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.495267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.495281] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 
00:30:05.853 [2024-02-13 08:30:39.495515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.495752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.495767] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.495958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.496240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.496255] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.496522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.496755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.496771] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.497188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.497403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.497418] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 
00:30:05.853 [2024-02-13 08:30:39.497641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.497875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.497890] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.498195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.498439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.498454] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.498739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.498953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.498968] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 [2024-02-13 08:30:39.499168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.499505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.499520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 
00:30:05.853 [2024-02-13 08:30:39.499762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.500045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.500061] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 08:30:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:05.853 [2024-02-13 08:30:39.500290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 08:30:39 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.853 [2024-02-13 08:30:39.500522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.500537] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.853 qpair failed and we were unable to recover it. 00:30:05.853 08:30:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:05.853 [2024-02-13 08:30:39.500824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 08:30:39 -- common/autotest_common.sh@10 -- # set +x 00:30:05.853 [2024-02-13 08:30:39.501037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.853 [2024-02-13 08:30:39.501053] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 
00:30:05.854 [2024-02-13 08:30:39.501417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.501697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.501712] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.501982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.502212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.502227] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.502454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.502793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.502807] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.503014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.503349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.503363] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 
00:30:05.854 [2024-02-13 08:30:39.503778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.504010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.504025] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.504302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.504580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.504595] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.504914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.505218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.505233] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.505516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.505794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.505813] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 
00:30:05.854 [2024-02-13 08:30:39.506111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.506287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.506301] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.506575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.506788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.506803] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.507141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.507356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.507371] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 00:30:05.854 [2024-02-13 08:30:39.507586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.507868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.854 [2024-02-13 08:30:39.507883] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:05.854 qpair failed and we were unable to recover it. 
00:30:06.115 [2024-02-13 08:30:39.508099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 08:30:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.115 [2024-02-13 08:30:39.508331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.508346] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:06.115 qpair failed and we were unable to recover it. 00:30:06.115 08:30:39 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.115 [2024-02-13 08:30:39.508642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 08:30:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.115 [2024-02-13 08:30:39.508937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.508953] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:06.115 qpair failed and we were unable to recover it. 00:30:06.115 08:30:39 -- common/autotest_common.sh@10 -- # set +x 00:30:06.115 [2024-02-13 08:30:39.509195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.509473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.509487] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:06.115 qpair failed and we were unable to recover it. 
00:30:06.115 [2024-02-13 08:30:39.509711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.509945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.509961] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:06.115 qpair failed and we were unable to recover it. 00:30:06.115 [2024-02-13 08:30:39.510269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.510541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.510556] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:06.115 qpair failed and we were unable to recover it. 00:30:06.115 [2024-02-13 08:30:39.510779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.511059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.511074] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:06.115 qpair failed and we were unable to recover it. 00:30:06.115 [2024-02-13 08:30:39.511207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.115 [2024-02-13 08:30:39.511505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.116 [2024-02-13 08:30:39.511520] nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13da510 with addr=10.0.0.2, port=4420 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.511608] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.116 08:30:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.116 08:30:39 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.116 08:30:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:06.116 08:30:39 -- common/autotest_common.sh@10 -- # set +x 00:30:06.116 [2024-02-13 08:30:39.517235] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.517384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.517410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.517422] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.517431] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.517458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 08:30:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:06.116 08:30:39 -- host/target_disconnect.sh@58 -- # wait 2445246 00:30:06.116 [2024-02-13 08:30:39.527195] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.527338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.527357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.527364] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.527370] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.527387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.537090] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.537190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.537208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.537214] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.537220] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.537236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.547151] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.547254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.547271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.547278] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.547284] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.547299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.557125] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.557226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.557242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.557249] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.557255] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.557270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.567152] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.567256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.567274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.567282] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.567288] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.567303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.577252] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.577351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.577368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.577375] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.577381] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.577396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.587173] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.587278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.587296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.587305] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.587311] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.587327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.597212] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.597319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.597336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.597343] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.597348] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.597364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.607311] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.607419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.607437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.607444] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.607450] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.607466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.116 [2024-02-13 08:30:39.617340] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.116 [2024-02-13 08:30:39.617440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.116 [2024-02-13 08:30:39.617456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.116 [2024-02-13 08:30:39.617463] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.116 [2024-02-13 08:30:39.617469] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.116 [2024-02-13 08:30:39.617485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.116 qpair failed and we were unable to recover it. 
00:30:06.117 [2024-02-13 08:30:39.627371] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.117 [2024-02-13 08:30:39.627469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.117 [2024-02-13 08:30:39.627486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.117 [2024-02-13 08:30:39.627493] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.117 [2024-02-13 08:30:39.627498] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.117 [2024-02-13 08:30:39.627514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.117 qpair failed and we were unable to recover it. 
00:30:06.117 [2024-02-13 08:30:39.637395] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.637495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.637512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.637519] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.637525] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.637540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.647389] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.647483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.647500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.647507] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.647513] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.647528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.657440] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.657535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.657552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.657558] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.657564] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.657579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.667475] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.667573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.667591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.667597] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.667603] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.667617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.677507] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.677604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.677626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.677633] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.677639] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.677662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.687523] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.687618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.687635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.687642] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.687656] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.687672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.697558] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.697659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.697677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.697684] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.697689] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.697705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.707593] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.707696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.707713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.707720] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.707726] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.707741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.717595] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.717699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.717716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.717722] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.717729] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.717744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.727627] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.727728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.727745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.727752] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.727757] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.727772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.737650] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.737749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.117 [2024-02-13 08:30:39.737767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.117 [2024-02-13 08:30:39.737773] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.117 [2024-02-13 08:30:39.737779] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.117 [2024-02-13 08:30:39.737794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.117 qpair failed and we were unable to recover it.
00:30:06.117 [2024-02-13 08:30:39.747629] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.117 [2024-02-13 08:30:39.747727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.118 [2024-02-13 08:30:39.747745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.118 [2024-02-13 08:30:39.747752] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.118 [2024-02-13 08:30:39.747758] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.118 [2024-02-13 08:30:39.747774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.118 qpair failed and we were unable to recover it.
00:30:06.118 [2024-02-13 08:30:39.757834] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.118 [2024-02-13 08:30:39.757941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.118 [2024-02-13 08:30:39.757957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.118 [2024-02-13 08:30:39.757964] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.118 [2024-02-13 08:30:39.757969] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.118 [2024-02-13 08:30:39.757985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.118 qpair failed and we were unable to recover it.
00:30:06.118 [2024-02-13 08:30:39.767808] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.118 [2024-02-13 08:30:39.767932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.118 [2024-02-13 08:30:39.767952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.118 [2024-02-13 08:30:39.767959] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.118 [2024-02-13 08:30:39.767964] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.118 [2024-02-13 08:30:39.767979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.118 qpair failed and we were unable to recover it.
00:30:06.118 [2024-02-13 08:30:39.777814] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.118 [2024-02-13 08:30:39.778035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.118 [2024-02-13 08:30:39.778053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.118 [2024-02-13 08:30:39.778059] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.118 [2024-02-13 08:30:39.778065] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.118 [2024-02-13 08:30:39.778080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.118 qpair failed and we were unable to recover it.
00:30:06.118 [2024-02-13 08:30:39.787800] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.118 [2024-02-13 08:30:39.787897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.118 [2024-02-13 08:30:39.787914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.118 [2024-02-13 08:30:39.787921] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.118 [2024-02-13 08:30:39.787927] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.118 [2024-02-13 08:30:39.787942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.118 qpair failed and we were unable to recover it.
00:30:06.118 [2024-02-13 08:30:39.797827] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.118 [2024-02-13 08:30:39.797926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.118 [2024-02-13 08:30:39.797943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.118 [2024-02-13 08:30:39.797949] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.118 [2024-02-13 08:30:39.797955] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.118 [2024-02-13 08:30:39.797970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.118 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.807872] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.807964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.807981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.807988] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.807993] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.808012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.379 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.817873] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.817980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.817997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.818003] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.818009] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.818024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.379 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.827869] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.827973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.827990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.827997] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.828002] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.828017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.379 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.837922] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.838023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.838040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.838046] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.838052] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.838067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.379 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.847873] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.847965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.847983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.847989] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.847995] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.848010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.379 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.858001] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.858094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.858114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.858121] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.858126] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.858142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.379 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.867956] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.868059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.868075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.868082] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.868088] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.868103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.379 qpair failed and we were unable to recover it.
00:30:06.379 [2024-02-13 08:30:39.878069] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.379 [2024-02-13 08:30:39.878160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.379 [2024-02-13 08:30:39.878177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.379 [2024-02-13 08:30:39.878184] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.379 [2024-02-13 08:30:39.878189] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.379 [2024-02-13 08:30:39.878204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.888078] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.888177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.888193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.888200] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.888205] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.888220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.898110] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.898207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.898224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.898230] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.898236] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.898254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.908150] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.908246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.908262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.908269] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.908274] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.908290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.918160] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.918256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.918273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.918279] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.918285] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.918300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.928195] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.928290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.928306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.928313] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.928318] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.928333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.938211] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.938307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.938324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.938331] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.938337] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.938351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.948262] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.948359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.948379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.948386] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.948392] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.948408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.958270] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.958410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.958427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.958434] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.958440] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.958455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.968301] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.968399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.968415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.968422] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.968427] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.968443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.978311] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.978404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.978420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.978426] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.978432] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.978448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.988365] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.380 [2024-02-13 08:30:39.988463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.380 [2024-02-13 08:30:39.988479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.380 [2024-02-13 08:30:39.988486] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.380 [2024-02-13 08:30:39.988491] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.380 [2024-02-13 08:30:39.988510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.380 qpair failed and we were unable to recover it.
00:30:06.380 [2024-02-13 08:30:39.998383] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.380 [2024-02-13 08:30:39.998481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.380 [2024-02-13 08:30:39.998498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.380 [2024-02-13 08:30:39.998505] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.380 [2024-02-13 08:30:39.998510] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.380 [2024-02-13 08:30:39.998525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.380 qpair failed and we were unable to recover it. 
00:30:06.380 [2024-02-13 08:30:40.008361] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.380 [2024-02-13 08:30:40.008458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.380 [2024-02-13 08:30:40.008475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.380 [2024-02-13 08:30:40.008482] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.380 [2024-02-13 08:30:40.008488] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.381 [2024-02-13 08:30:40.008503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.381 qpair failed and we were unable to recover it. 
00:30:06.381 [2024-02-13 08:30:40.018454] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.381 [2024-02-13 08:30:40.018560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.381 [2024-02-13 08:30:40.018579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.381 [2024-02-13 08:30:40.018587] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.381 [2024-02-13 08:30:40.018593] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.381 [2024-02-13 08:30:40.018610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.381 qpair failed and we were unable to recover it. 
00:30:06.381 [2024-02-13 08:30:40.028495] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.381 [2024-02-13 08:30:40.028595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.381 [2024-02-13 08:30:40.028614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.381 [2024-02-13 08:30:40.028622] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.381 [2024-02-13 08:30:40.028627] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.381 [2024-02-13 08:30:40.028644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.381 qpair failed and we were unable to recover it. 
00:30:06.381 [2024-02-13 08:30:40.038499] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.381 [2024-02-13 08:30:40.038593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.381 [2024-02-13 08:30:40.038614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.381 [2024-02-13 08:30:40.038621] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.381 [2024-02-13 08:30:40.038626] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.381 [2024-02-13 08:30:40.038642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.381 qpair failed and we were unable to recover it. 
00:30:06.381 [2024-02-13 08:30:40.048541] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.381 [2024-02-13 08:30:40.048637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.381 [2024-02-13 08:30:40.048660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.381 [2024-02-13 08:30:40.048667] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.381 [2024-02-13 08:30:40.048673] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.381 [2024-02-13 08:30:40.048688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.381 qpair failed and we were unable to recover it. 
00:30:06.381 [2024-02-13 08:30:40.058517] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.381 [2024-02-13 08:30:40.058662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.381 [2024-02-13 08:30:40.058681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.381 [2024-02-13 08:30:40.058689] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.381 [2024-02-13 08:30:40.058695] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.381 [2024-02-13 08:30:40.058711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.381 qpair failed and we were unable to recover it. 
00:30:06.642 [2024-02-13 08:30:40.068623] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.642 [2024-02-13 08:30:40.068740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.642 [2024-02-13 08:30:40.068758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.642 [2024-02-13 08:30:40.068765] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.642 [2024-02-13 08:30:40.068771] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.642 [2024-02-13 08:30:40.068787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.642 qpair failed and we were unable to recover it. 
00:30:06.642 [2024-02-13 08:30:40.078612] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.642 [2024-02-13 08:30:40.078712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.642 [2024-02-13 08:30:40.078729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.642 [2024-02-13 08:30:40.078736] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.642 [2024-02-13 08:30:40.078742] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.642 [2024-02-13 08:30:40.078761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.642 qpair failed and we were unable to recover it. 
00:30:06.642 [2024-02-13 08:30:40.088568] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.642 [2024-02-13 08:30:40.088665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.642 [2024-02-13 08:30:40.088683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.642 [2024-02-13 08:30:40.088690] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.642 [2024-02-13 08:30:40.088696] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.642 [2024-02-13 08:30:40.088711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.642 qpair failed and we were unable to recover it. 
00:30:06.642 [2024-02-13 08:30:40.098659] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.642 [2024-02-13 08:30:40.098756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.642 [2024-02-13 08:30:40.098773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.642 [2024-02-13 08:30:40.098780] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.642 [2024-02-13 08:30:40.098786] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.642 [2024-02-13 08:30:40.098801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.642 qpair failed and we were unable to recover it. 
00:30:06.642 [2024-02-13 08:30:40.108717] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.642 [2024-02-13 08:30:40.108817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.642 [2024-02-13 08:30:40.108834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.642 [2024-02-13 08:30:40.108841] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.642 [2024-02-13 08:30:40.108847] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.642 [2024-02-13 08:30:40.108862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.642 qpair failed and we were unable to recover it. 
00:30:06.642 [2024-02-13 08:30:40.118748] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.642 [2024-02-13 08:30:40.118853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.642 [2024-02-13 08:30:40.118871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.642 [2024-02-13 08:30:40.118878] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.642 [2024-02-13 08:30:40.118884] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.642 [2024-02-13 08:30:40.118899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.128774] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.128872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.128892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.128899] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.128904] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.128920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.138793] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.138886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.138902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.138909] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.138914] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.138930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.148870] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.148969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.148986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.148993] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.148998] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.149013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.158867] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.158968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.158985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.158992] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.158998] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.159013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.168932] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.169026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.169042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.169049] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.169058] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.169073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.178922] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.179021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.179037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.179044] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.179050] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.179065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.188961] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.189061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.189078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.189084] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.189090] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.189105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.198998] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.199122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.199140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.199146] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.199152] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.199167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.209021] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.209113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.209129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.209136] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.209141] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.209156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.219027] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.219126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.219142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.219149] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.219154] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.219169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.229074] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.229172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.229189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.229196] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.229201] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.229216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.239100] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.239231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.239248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.239255] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.239261] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.643 [2024-02-13 08:30:40.239276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.643 qpair failed and we were unable to recover it. 
00:30:06.643 [2024-02-13 08:30:40.249130] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.643 [2024-02-13 08:30:40.249224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.643 [2024-02-13 08:30:40.249241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.643 [2024-02-13 08:30:40.249247] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.643 [2024-02-13 08:30:40.249253] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.644 [2024-02-13 08:30:40.249268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.644 qpair failed and we were unable to recover it. 
00:30:06.644 [2024-02-13 08:30:40.259196] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.644 [2024-02-13 08:30:40.259301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.644 [2024-02-13 08:30:40.259317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.644 [2024-02-13 08:30:40.259324] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.644 [2024-02-13 08:30:40.259333] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.644 [2024-02-13 08:30:40.259349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.644 qpair failed and we were unable to recover it. 
00:30:06.644 [2024-02-13 08:30:40.269208] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:06.644 [2024-02-13 08:30:40.269301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:06.644 [2024-02-13 08:30:40.269318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:06.644 [2024-02-13 08:30:40.269324] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:06.644 [2024-02-13 08:30:40.269330] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510 00:30:06.644 [2024-02-13 08:30:40.269345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:06.644 qpair failed and we were unable to recover it. 
00:30:06.644 [2024-02-13 08:30:40.279149] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.644 [2024-02-13 08:30:40.279291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.644 [2024-02-13 08:30:40.279307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.644 [2024-02-13 08:30:40.279314] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.644 [2024-02-13 08:30:40.279320] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.644 [2024-02-13 08:30:40.279335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.644 qpair failed and we were unable to recover it.
00:30:06.644 [2024-02-13 08:30:40.289239] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.644 [2024-02-13 08:30:40.289333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.644 [2024-02-13 08:30:40.289350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.644 [2024-02-13 08:30:40.289356] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.644 [2024-02-13 08:30:40.289362] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.644 [2024-02-13 08:30:40.289377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.644 qpair failed and we were unable to recover it.
00:30:06.644 [2024-02-13 08:30:40.299265] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.644 [2024-02-13 08:30:40.299355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.644 [2024-02-13 08:30:40.299372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.644 [2024-02-13 08:30:40.299378] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.644 [2024-02-13 08:30:40.299384] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.644 [2024-02-13 08:30:40.299399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.644 qpair failed and we were unable to recover it.
00:30:06.644 [2024-02-13 08:30:40.309293] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.644 [2024-02-13 08:30:40.309390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.644 [2024-02-13 08:30:40.309407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.644 [2024-02-13 08:30:40.309413] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.644 [2024-02-13 08:30:40.309419] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.644 [2024-02-13 08:30:40.309435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.644 qpair failed and we were unable to recover it.
00:30:06.644 [2024-02-13 08:30:40.319334] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.644 [2024-02-13 08:30:40.319438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.644 [2024-02-13 08:30:40.319454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.644 [2024-02-13 08:30:40.319460] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.644 [2024-02-13 08:30:40.319466] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.644 [2024-02-13 08:30:40.319481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.644 qpair failed and we were unable to recover it.
00:30:06.905 [2024-02-13 08:30:40.329355] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.905 [2024-02-13 08:30:40.329495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.905 [2024-02-13 08:30:40.329512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.905 [2024-02-13 08:30:40.329519] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.905 [2024-02-13 08:30:40.329524] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.905 [2024-02-13 08:30:40.329539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.905 qpair failed and we were unable to recover it.
00:30:06.905 [2024-02-13 08:30:40.339379] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.905 [2024-02-13 08:30:40.339469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.905 [2024-02-13 08:30:40.339485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.905 [2024-02-13 08:30:40.339492] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.905 [2024-02-13 08:30:40.339497] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.905 [2024-02-13 08:30:40.339512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.905 qpair failed and we were unable to recover it.
00:30:06.905 [2024-02-13 08:30:40.349425] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.905 [2024-02-13 08:30:40.349530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.905 [2024-02-13 08:30:40.349547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.905 [2024-02-13 08:30:40.349553] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.905 [2024-02-13 08:30:40.349562] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.905 [2024-02-13 08:30:40.349578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.905 qpair failed and we were unable to recover it.
00:30:06.905 [2024-02-13 08:30:40.359432] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.905 [2024-02-13 08:30:40.359530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.905 [2024-02-13 08:30:40.359546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.905 [2024-02-13 08:30:40.359553] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.905 [2024-02-13 08:30:40.359559] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.905 [2024-02-13 08:30:40.359573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.905 qpair failed and we were unable to recover it.
00:30:06.905 [2024-02-13 08:30:40.369469] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.905 [2024-02-13 08:30:40.369561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.905 [2024-02-13 08:30:40.369578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.905 [2024-02-13 08:30:40.369585] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.905 [2024-02-13 08:30:40.369590] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.905 [2024-02-13 08:30:40.369605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.905 qpair failed and we were unable to recover it.
00:30:06.905 [2024-02-13 08:30:40.379477] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.905 [2024-02-13 08:30:40.379572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.905 [2024-02-13 08:30:40.379588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.905 [2024-02-13 08:30:40.379595] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.905 [2024-02-13 08:30:40.379600] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.905 [2024-02-13 08:30:40.379616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.905 qpair failed and we were unable to recover it.
00:30:06.905 [2024-02-13 08:30:40.389531] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.905 [2024-02-13 08:30:40.389628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.905 [2024-02-13 08:30:40.389644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.905 [2024-02-13 08:30:40.389657] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.905 [2024-02-13 08:30:40.389662] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.389677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.399537] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.399637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.399659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.399666] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.399672] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.399687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.409554] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.409671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.409688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.409694] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.409700] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.409715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.419607] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.419712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.419728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.419735] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.419741] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.419756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.429653] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.429751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.429767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.429774] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.429779] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.429795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.439662] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.439766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.439783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.439790] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.439798] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.439813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.449698] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.449798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.449814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.449821] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.449827] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.449842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.459730] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.459822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.459839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.459846] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.459851] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.459867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.469773] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.469869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.469885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.469892] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.469898] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.469913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.479770] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.479870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.479887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.479894] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.479899] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13da510
00:30:06.906 [2024-02-13 08:30:40.479915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.489859] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.489999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.490030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.490043] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.490052] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.906 [2024-02-13 08:30:40.490079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.499851] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.499952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.499971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.499979] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.499985] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.906 [2024-02-13 08:30:40.500002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.509898] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.509995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.510012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.906 [2024-02-13 08:30:40.510019] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.906 [2024-02-13 08:30:40.510025] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.906 [2024-02-13 08:30:40.510042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.906 qpair failed and we were unable to recover it.
00:30:06.906 [2024-02-13 08:30:40.519908] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.906 [2024-02-13 08:30:40.520008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.906 [2024-02-13 08:30:40.520024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.520031] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.520037] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.520052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:06.907 [2024-02-13 08:30:40.529981] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.907 [2024-02-13 08:30:40.530089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.907 [2024-02-13 08:30:40.530106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.530116] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.530122] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.530138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:06.907 [2024-02-13 08:30:40.539940] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.907 [2024-02-13 08:30:40.540038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.907 [2024-02-13 08:30:40.540054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.540061] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.540067] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.540083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:06.907 [2024-02-13 08:30:40.549923] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.907 [2024-02-13 08:30:40.550022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.907 [2024-02-13 08:30:40.550039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.550046] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.550052] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.550068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:06.907 [2024-02-13 08:30:40.559977] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.907 [2024-02-13 08:30:40.560078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.907 [2024-02-13 08:30:40.560096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.560103] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.560109] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.560125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:06.907 [2024-02-13 08:30:40.570080] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.907 [2024-02-13 08:30:40.570194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.907 [2024-02-13 08:30:40.570212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.570219] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.570225] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.570241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:06.907 [2024-02-13 08:30:40.580077] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.907 [2024-02-13 08:30:40.580175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.907 [2024-02-13 08:30:40.580191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.580198] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.580203] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.580219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:06.907 [2024-02-13 08:30:40.590093] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:06.907 [2024-02-13 08:30:40.590192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:06.907 [2024-02-13 08:30:40.590208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:06.907 [2024-02-13 08:30:40.590215] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:06.907 [2024-02-13 08:30:40.590220] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:06.907 [2024-02-13 08:30:40.590236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:06.907 qpair failed and we were unable to recover it.
00:30:07.168 [2024-02-13 08:30:40.600112] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.168 [2024-02-13 08:30:40.600202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.168 [2024-02-13 08:30:40.600219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.168 [2024-02-13 08:30:40.600226] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.168 [2024-02-13 08:30:40.600231] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:07.168 [2024-02-13 08:30:40.600247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.168 qpair failed and we were unable to recover it.
00:30:07.168 [2024-02-13 08:30:40.610087] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.168 [2024-02-13 08:30:40.610175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.168 [2024-02-13 08:30:40.610193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.168 [2024-02-13 08:30:40.610200] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.168 [2024-02-13 08:30:40.610206] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:07.168 [2024-02-13 08:30:40.610223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.168 qpair failed and we were unable to recover it.
00:30:07.168 [2024-02-13 08:30:40.620120] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.168 [2024-02-13 08:30:40.620228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.168 [2024-02-13 08:30:40.620244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.168 [2024-02-13 08:30:40.620254] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.168 [2024-02-13 08:30:40.620260] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:07.169 [2024-02-13 08:30:40.620276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.169 qpair failed and we were unable to recover it.
00:30:07.169 [2024-02-13 08:30:40.630196] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:07.169 [2024-02-13 08:30:40.630295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:07.169 [2024-02-13 08:30:40.630311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:07.169 [2024-02-13 08:30:40.630318] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:07.169 [2024-02-13 08:30:40.630323] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:07.169 [2024-02-13 08:30:40.630338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:07.169 qpair failed and we were unable to recover it.
00:30:07.169 [2024-02-13 08:30:40.640209] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.640346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.640363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.640370] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.640375] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.640390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.650264] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.650357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.650373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.650380] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.650386] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.650401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.660251] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.660349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.660366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.660372] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.660378] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.660394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.670285] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.670397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.670413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.670420] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.670426] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.670441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.680387] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.680485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.680501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.680509] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.680515] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.680530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.690420] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.690513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.690529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.690536] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.690542] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.690557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.700441] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.700533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.700552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.700559] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.700565] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.700581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.710468] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.710566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.710588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.710595] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.710601] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.710616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.720438] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.720536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.720553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.720560] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.720566] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.720581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.730527] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.730619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.730636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.730643] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.730653] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.730669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.740534] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.740629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.740645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.740657] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.169 [2024-02-13 08:30:40.740663] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.169 [2024-02-13 08:30:40.740679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.169 qpair failed and we were unable to recover it. 
00:30:07.169 [2024-02-13 08:30:40.750555] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.169 [2024-02-13 08:30:40.750658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.169 [2024-02-13 08:30:40.750676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.169 [2024-02-13 08:30:40.750682] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.750688] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.750707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.760592] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.760692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.760709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.760716] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.760721] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.760737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.770653] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.770759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.770775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.770782] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.770788] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.770803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.780645] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.780744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.780760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.780767] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.780773] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.780789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.790642] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.790748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.790765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.790772] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.790778] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.790793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.800640] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.800738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.800757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.800764] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.800769] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.800785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.810738] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.810839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.810856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.810863] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.810868] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.810884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.820814] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.820912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.820928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.820935] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.820941] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.820956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.830744] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.830840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.830856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.830863] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.830868] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.830883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.840771] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.840878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.840894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.840901] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.840907] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.840926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.170 [2024-02-13 08:30:40.850883] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.170 [2024-02-13 08:30:40.850975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.170 [2024-02-13 08:30:40.850992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.170 [2024-02-13 08:30:40.850999] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.170 [2024-02-13 08:30:40.851005] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.170 [2024-02-13 08:30:40.851021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.170 qpair failed and we were unable to recover it. 
00:30:07.431 [2024-02-13 08:30:40.860839] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.431 [2024-02-13 08:30:40.860939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.431 [2024-02-13 08:30:40.860956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.431 [2024-02-13 08:30:40.860962] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.431 [2024-02-13 08:30:40.860968] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.431 [2024-02-13 08:30:40.860983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.431 qpair failed and we were unable to recover it. 
00:30:07.431 [2024-02-13 08:30:40.870940] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.431 [2024-02-13 08:30:40.871035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.431 [2024-02-13 08:30:40.871052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.431 [2024-02-13 08:30:40.871058] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.431 [2024-02-13 08:30:40.871064] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.431 [2024-02-13 08:30:40.871079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.431 qpair failed and we were unable to recover it. 
00:30:07.431 [2024-02-13 08:30:40.880978] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.431 [2024-02-13 08:30:40.881079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.431 [2024-02-13 08:30:40.881096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.431 [2024-02-13 08:30:40.881103] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.431 [2024-02-13 08:30:40.881109] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.881125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.890980] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.891080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.891096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.891103] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.891108] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.891123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.900957] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.901063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.901080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.901086] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.901092] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.901107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.911044] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.911151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.911167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.911174] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.911180] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.911195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.921025] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.921125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.921141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.921148] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.921154] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.921169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.931043] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.931141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.931157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.931163] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.931172] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.931188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.941080] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.941171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.941187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.941194] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.941200] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.941215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.951219] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.951325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.951341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.951348] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.951353] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.951368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.961191] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.961282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.961299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.961305] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.961311] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.961326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.971208] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.971303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.971320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.971326] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.971332] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.971346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.981257] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.981358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.981374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.981380] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.981386] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.981401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:40.991297] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:40.991392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:40.991408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:40.991414] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:40.991420] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:40.991435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:41.001308] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:41.001402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:41.001418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:41.001425] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.432 [2024-02-13 08:30:41.001430] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.432 [2024-02-13 08:30:41.001446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.432 qpair failed and we were unable to recover it. 
00:30:07.432 [2024-02-13 08:30:41.011276] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.432 [2024-02-13 08:30:41.011372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.432 [2024-02-13 08:30:41.011388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.432 [2024-02-13 08:30:41.011395] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.011400] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.011415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.021371] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.021470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.021486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.021495] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.021501] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.021517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.031449] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.031553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.031569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.031576] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.031582] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.031597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.041451] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.041545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.041562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.041568] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.041574] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.041589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.051470] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.051559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.051575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.051582] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.051588] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.051604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.061416] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.061510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.061527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.061533] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.061539] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.061555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.071533] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.071631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.071653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.071660] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.071666] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.071681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.081563] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.081665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.081682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.081688] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.081694] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.081709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.091573] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.091675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.091692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.091698] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.091704] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.091719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.101524] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.101618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.101635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.101641] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.101664] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.101680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.433 [2024-02-13 08:30:41.111644] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.433 [2024-02-13 08:30:41.111759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.433 [2024-02-13 08:30:41.111776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.433 [2024-02-13 08:30:41.111785] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.433 [2024-02-13 08:30:41.111791] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.433 [2024-02-13 08:30:41.111807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.433 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.121667] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.121766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.121782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.121789] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.121794] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.121810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.131697] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.131799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.131815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.131822] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.131828] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.131844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.141715] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.141810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.141827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.141834] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.141839] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.141855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.151871] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.151968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.151984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.151991] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.151997] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.152013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.161771] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.161871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.161887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.161894] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.161900] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.161915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.171776] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.171867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.171883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.171890] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.171896] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.171911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.181844] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.181941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.181957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.181963] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.181969] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.181984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.191876] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.191972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.191988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.191995] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.192001] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.192016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.201909] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.202112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.202132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.202139] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.202145] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.202160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.694 [2024-02-13 08:30:41.211829] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.694 [2024-02-13 08:30:41.212030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.694 [2024-02-13 08:30:41.212047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.694 [2024-02-13 08:30:41.212053] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.694 [2024-02-13 08:30:41.212059] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.694 [2024-02-13 08:30:41.212073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.694 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.221934] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.222062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.222078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.222084] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.222090] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.222106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.231973] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.232072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.232088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.232095] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.232101] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.232115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.241988] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.242089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.242105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.242112] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.242117] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.242136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.252020] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.252119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.252136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.252142] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.252148] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.252163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.262047] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.262135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.262151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.262158] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.262164] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.262179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.272057] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.272151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.272167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.272174] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.272180] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.272195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.282114] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.282208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.282224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.282231] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.282237] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.282252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.292087] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.292218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.292237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.292244] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.292249] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.292265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.302176] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.302270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.302285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.302292] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.302298] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.302313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.312206] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.312299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.312315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.312322] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.312328] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.312344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.322149] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.322251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.322267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.322273] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.322279] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.322295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.332247] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.332335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.332351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.332358] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.332363] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.695 [2024-02-13 08:30:41.332382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.695 qpair failed and we were unable to recover it. 
00:30:07.695 [2024-02-13 08:30:41.342279] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.695 [2024-02-13 08:30:41.342370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.695 [2024-02-13 08:30:41.342386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.695 [2024-02-13 08:30:41.342392] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.695 [2024-02-13 08:30:41.342398] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.696 [2024-02-13 08:30:41.342413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.696 qpair failed and we were unable to recover it. 
00:30:07.696 [2024-02-13 08:30:41.352321] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.696 [2024-02-13 08:30:41.352420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.696 [2024-02-13 08:30:41.352435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.696 [2024-02-13 08:30:41.352442] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.696 [2024-02-13 08:30:41.352447] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.696 [2024-02-13 08:30:41.352463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.696 qpair failed and we were unable to recover it. 
00:30:07.696 [2024-02-13 08:30:41.362254] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.696 [2024-02-13 08:30:41.362354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.696 [2024-02-13 08:30:41.362369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.696 [2024-02-13 08:30:41.362376] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.696 [2024-02-13 08:30:41.362382] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.696 [2024-02-13 08:30:41.362397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.696 qpair failed and we were unable to recover it. 
00:30:07.696 [2024-02-13 08:30:41.372316] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.696 [2024-02-13 08:30:41.372418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.696 [2024-02-13 08:30:41.372434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.696 [2024-02-13 08:30:41.372440] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.696 [2024-02-13 08:30:41.372445] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.696 [2024-02-13 08:30:41.372460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.696 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.382395] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.382485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.382504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.382511] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.382517] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.956 [2024-02-13 08:30:41.382532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.956 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.392429] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.392523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.392540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.392546] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.392552] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.956 [2024-02-13 08:30:41.392567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.956 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.402456] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.402552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.402568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.402575] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.402580] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.956 [2024-02-13 08:30:41.402595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.956 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.412476] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.412572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.412588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.412595] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.412601] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.956 [2024-02-13 08:30:41.412616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.956 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.422509] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.422731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.422748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.422754] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.422763] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.956 [2024-02-13 08:30:41.422780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.956 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.432540] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.432634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.432655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.432662] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.432668] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.956 [2024-02-13 08:30:41.432684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.956 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.442581] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.442716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.442732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.442739] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.442745] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.956 [2024-02-13 08:30:41.442760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.956 qpair failed and we were unable to recover it. 
00:30:07.956 [2024-02-13 08:30:41.452590] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.956 [2024-02-13 08:30:41.452682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.956 [2024-02-13 08:30:41.452699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.956 [2024-02-13 08:30:41.452705] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.956 [2024-02-13 08:30:41.452711] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.452726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.462614] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.462708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.462725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.462731] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.462737] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.462752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.472635] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.472742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.472758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.472765] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.472770] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.472786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.482682] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.482775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.482792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.482799] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.482804] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.482820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.492711] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.492800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.492816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.492823] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.492829] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.492844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.502671] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.502773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.502790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.502796] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.502802] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.502817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.512690] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.512793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.512808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.512815] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.512825] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.512840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.522728] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.522854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.522870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.522877] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.522882] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.522898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.532808] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.532901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.532917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.532924] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.532930] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.532945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.542902] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.543015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.543032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.543039] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.543045] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.543061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.552920] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.553016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.553033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.553040] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.553046] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.553061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.562904] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.562997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.563014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.563020] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.563026] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.563042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.572873] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.572975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.572992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.572999] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.957 [2024-02-13 08:30:41.573005] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.957 [2024-02-13 08:30:41.573020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.957 qpair failed and we were unable to recover it. 
00:30:07.957 [2024-02-13 08:30:41.582882] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.957 [2024-02-13 08:30:41.582987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.957 [2024-02-13 08:30:41.583004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.957 [2024-02-13 08:30:41.583010] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.958 [2024-02-13 08:30:41.583016] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.958 [2024-02-13 08:30:41.583031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.958 qpair failed and we were unable to recover it. 
00:30:07.958 [2024-02-13 08:30:41.593014] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.958 [2024-02-13 08:30:41.593111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.958 [2024-02-13 08:30:41.593127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.958 [2024-02-13 08:30:41.593134] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.958 [2024-02-13 08:30:41.593139] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.958 [2024-02-13 08:30:41.593154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.958 qpair failed and we were unable to recover it. 
00:30:07.958 [2024-02-13 08:30:41.603010] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.958 [2024-02-13 08:30:41.603110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.958 [2024-02-13 08:30:41.603126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.958 [2024-02-13 08:30:41.603136] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.958 [2024-02-13 08:30:41.603142] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.958 [2024-02-13 08:30:41.603157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.958 qpair failed and we were unable to recover it. 
00:30:07.958 [2024-02-13 08:30:41.613052] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.958 [2024-02-13 08:30:41.613178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.958 [2024-02-13 08:30:41.613195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.958 [2024-02-13 08:30:41.613202] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.958 [2024-02-13 08:30:41.613208] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.958 [2024-02-13 08:30:41.613223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.958 qpair failed and we were unable to recover it. 
00:30:07.958 [2024-02-13 08:30:41.622995] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.958 [2024-02-13 08:30:41.623086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.958 [2024-02-13 08:30:41.623101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.958 [2024-02-13 08:30:41.623107] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.958 [2024-02-13 08:30:41.623113] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.958 [2024-02-13 08:30:41.623129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.958 qpair failed and we were unable to recover it. 
00:30:07.958 [2024-02-13 08:30:41.633029] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:07.958 [2024-02-13 08:30:41.633124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:07.958 [2024-02-13 08:30:41.633139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:07.958 [2024-02-13 08:30:41.633146] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:07.958 [2024-02-13 08:30:41.633152] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:07.958 [2024-02-13 08:30:41.633167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.958 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.643144] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.643236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.643251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.643258] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.643263] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.643278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.653163] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.653265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.653281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.653288] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.653293] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.653309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.663201] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.663292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.663308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.663315] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.663321] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.663336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.673247] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.673347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.673363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.673370] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.673376] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.673392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.683248] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.683343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.683359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.683366] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.683372] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.683387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.693241] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.693332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.693352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.693358] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.693364] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.693379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.703312] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.703405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.703421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.703428] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.703433] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.703448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.713344] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.713441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.713457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.713464] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.713469] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.713484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.723357] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.723451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.723466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.723473] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.723478] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.723493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.733414] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.733504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.733521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.733527] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.733533] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.733547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.743433] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.743529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.743546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.743552] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.743558] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.743574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.753469] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.219 [2024-02-13 08:30:41.753566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.219 [2024-02-13 08:30:41.753582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.219 [2024-02-13 08:30:41.753589] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.219 [2024-02-13 08:30:41.753595] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.219 [2024-02-13 08:30:41.753611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.219 qpair failed and we were unable to recover it. 
00:30:08.219 [2024-02-13 08:30:41.763582] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.763690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.763707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.763714] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.763719] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.763735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.773567] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.773674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.773690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.773697] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.773702] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.773718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.783607] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.783710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.783730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.783736] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.783742] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.783757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.793623] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.793724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.793740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.793747] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.793753] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.793769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.803606] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.803704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.803720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.803726] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.803732] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.803747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.813663] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.813756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.813772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.813779] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.813785] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.813801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.823590] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.823691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.823707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.823714] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.823719] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.823738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.833699] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.833792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.833808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.833815] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.833820] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.833835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.843718] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.843817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.843832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.843839] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.843844] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.843860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.853767] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.853873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.853889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.853896] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.853901] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.853917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.863754] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.863878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.863894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.863901] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.863907] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.863922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.873817] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.220 [2024-02-13 08:30:41.873911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.220 [2024-02-13 08:30:41.873930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.220 [2024-02-13 08:30:41.873937] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.220 [2024-02-13 08:30:41.873942] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.220 [2024-02-13 08:30:41.873957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.220 qpair failed and we were unable to recover it. 
00:30:08.220 [2024-02-13 08:30:41.883851] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.220 [2024-02-13 08:30:41.883972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.220 [2024-02-13 08:30:41.883988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.220 [2024-02-13 08:30:41.883994] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.220 [2024-02-13 08:30:41.884000] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.220 [2024-02-13 08:30:41.884015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.220 qpair failed and we were unable to recover it.
00:30:08.221 [2024-02-13 08:30:41.893859] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.221 [2024-02-13 08:30:41.893951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.221 [2024-02-13 08:30:41.893968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.221 [2024-02-13 08:30:41.893974] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.221 [2024-02-13 08:30:41.893980] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.221 [2024-02-13 08:30:41.893995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.221 qpair failed and we were unable to recover it.
00:30:08.221 [2024-02-13 08:30:41.903872] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.221 [2024-02-13 08:30:41.903968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.221 [2024-02-13 08:30:41.903985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.221 [2024-02-13 08:30:41.903991] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.221 [2024-02-13 08:30:41.903996] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.221 [2024-02-13 08:30:41.904011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.221 qpair failed and we were unable to recover it.
00:30:08.481 [2024-02-13 08:30:41.913943] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.481 [2024-02-13 08:30:41.914040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.481 [2024-02-13 08:30:41.914056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.481 [2024-02-13 08:30:41.914062] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.481 [2024-02-13 08:30:41.914071] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.481 [2024-02-13 08:30:41.914086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.481 qpair failed and we were unable to recover it.
00:30:08.481 [2024-02-13 08:30:41.923940] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.481 [2024-02-13 08:30:41.924033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.481 [2024-02-13 08:30:41.924049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.481 [2024-02-13 08:30:41.924055] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.481 [2024-02-13 08:30:41.924061] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.481 [2024-02-13 08:30:41.924076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.481 qpair failed and we were unable to recover it.
00:30:08.481 [2024-02-13 08:30:41.933979] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.481 [2024-02-13 08:30:41.934074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.481 [2024-02-13 08:30:41.934090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.481 [2024-02-13 08:30:41.934097] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.481 [2024-02-13 08:30:41.934102] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:41.934117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:41.944071] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:41.944164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:41.944180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:41.944186] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:41.944192] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:41.944208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:41.954000] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:41.954104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:41.954120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:41.954127] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:41.954132] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:41.954147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:41.964055] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:41.964154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:41.964171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:41.964177] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:41.964183] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:41.964198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:41.974072] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:41.974174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:41.974190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:41.974197] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:41.974202] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:41.974217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:41.984096] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:41.984197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:41.984213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:41.984220] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:41.984226] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:41.984242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:41.994121] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:41.994224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:41.994240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:41.994246] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:41.994252] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:41.994267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:42.004135] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:42.004233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:42.004249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:42.004255] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:42.004264] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:42.004280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:42.014160] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:42.014254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:42.014270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:42.014277] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:42.014283] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:42.014298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:42.024205] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:42.024298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:42.024314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:42.024320] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:42.024326] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:42.024342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:42.034274] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:42.034371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:42.034387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:42.034394] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:42.034400] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:42.034415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:42.044313] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:42.044416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:42.044432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:42.044439] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:42.044445] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:42.044461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:42.054293] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:42.054389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:42.054406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.482 [2024-02-13 08:30:42.054413] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.482 [2024-02-13 08:30:42.054418] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.482 [2024-02-13 08:30:42.054434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.482 qpair failed and we were unable to recover it.
00:30:08.482 [2024-02-13 08:30:42.064332] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.482 [2024-02-13 08:30:42.064430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.482 [2024-02-13 08:30:42.064446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.064453] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.064459] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.064475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.074429] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.074532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.074548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.074554] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.074560] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.074575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.084424] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.084516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.084532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.084539] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.084545] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.084560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.094460] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.094555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.094571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.094581] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.094587] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.094602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.104485] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.104615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.104631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.104638] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.104644] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.104665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.114509] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.114606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.114623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.114629] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.114635] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.114655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.124492] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.124596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.124612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.124618] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.124624] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.124638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.134555] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.134653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.134670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.134677] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.134682] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.134698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.144600] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.144701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.144718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.144724] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.144730] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.144746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.154629] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.154728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.154745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.154752] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.154758] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.154774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.483 [2024-02-13 08:30:42.164633] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.483 [2024-02-13 08:30:42.164736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.483 [2024-02-13 08:30:42.164752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.483 [2024-02-13 08:30:42.164759] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.483 [2024-02-13 08:30:42.164764] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.483 [2024-02-13 08:30:42.164780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.483 qpair failed and we were unable to recover it.
00:30:08.742 [2024-02-13 08:30:42.174668] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:08.742 [2024-02-13 08:30:42.174765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:08.742 [2024-02-13 08:30:42.174781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:08.742 [2024-02-13 08:30:42.174788] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:08.742 [2024-02-13 08:30:42.174793] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:08.742 [2024-02-13 08:30:42.174808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.742 qpair failed and we were unable to recover it.
00:30:08.742 [2024-02-13 08:30:42.184656] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.742 [2024-02-13 08:30:42.184748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.743 [2024-02-13 08:30:42.184764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.743 [2024-02-13 08:30:42.184773] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.743 [2024-02-13 08:30:42.184779] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.743 [2024-02-13 08:30:42.184794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.743 qpair failed and we were unable to recover it. 
00:30:08.743 [2024-02-13 08:30:42.194748] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.743 [2024-02-13 08:30:42.194847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.743 [2024-02-13 08:30:42.194863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.743 [2024-02-13 08:30:42.194870] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.743 [2024-02-13 08:30:42.194876] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.743 [2024-02-13 08:30:42.194891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.743 qpair failed and we were unable to recover it. 
00:30:08.743 [2024-02-13 08:30:42.204802] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:08.743 [2024-02-13 08:30:42.204905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:08.743 [2024-02-13 08:30:42.204922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:08.743 [2024-02-13 08:30:42.204929] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:08.743 [2024-02-13 08:30:42.204934] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:08.743 [2024-02-13 08:30:42.204949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.743 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.555762] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.555861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.555877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.555884] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.555889] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.555904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.565787] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.565882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.565899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.565905] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.565911] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.565927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.575827] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.575937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.575955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.575963] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.575969] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.575985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.585977] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.586073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.586090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.586099] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.586105] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.586121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.595912] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.596026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.596043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.596050] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.596055] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.596072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.605899] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.606033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.606049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.606055] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.606061] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.606077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.615976] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.616071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.616086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.616093] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.616099] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.616114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.625932] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.626073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.626090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.626097] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.626102] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.626117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.636000] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.636095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.636111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.636118] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.636124] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.636139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.646006] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.006 [2024-02-13 08:30:42.646102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.006 [2024-02-13 08:30:42.646118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.006 [2024-02-13 08:30:42.646125] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.006 [2024-02-13 08:30:42.646130] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.006 [2024-02-13 08:30:42.646146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.006 qpair failed and we were unable to recover it. 
00:30:09.006 [2024-02-13 08:30:42.656039] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.007 [2024-02-13 08:30:42.656132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.007 [2024-02-13 08:30:42.656148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.007 [2024-02-13 08:30:42.656154] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.007 [2024-02-13 08:30:42.656160] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.007 [2024-02-13 08:30:42.656175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.007 qpair failed and we were unable to recover it. 
00:30:09.007 [2024-02-13 08:30:42.666067] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.007 [2024-02-13 08:30:42.666164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.007 [2024-02-13 08:30:42.666180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.007 [2024-02-13 08:30:42.666187] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.007 [2024-02-13 08:30:42.666193] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.007 [2024-02-13 08:30:42.666208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.007 qpair failed and we were unable to recover it. 
00:30:09.007 [2024-02-13 08:30:42.676112] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.007 [2024-02-13 08:30:42.676206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.007 [2024-02-13 08:30:42.676222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.007 [2024-02-13 08:30:42.676232] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.007 [2024-02-13 08:30:42.676238] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.007 [2024-02-13 08:30:42.676254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.007 qpair failed and we were unable to recover it. 
00:30:09.007 [2024-02-13 08:30:42.686106] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.007 [2024-02-13 08:30:42.686204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.007 [2024-02-13 08:30:42.686220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.007 [2024-02-13 08:30:42.686226] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.007 [2024-02-13 08:30:42.686232] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.007 [2024-02-13 08:30:42.686247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.007 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.696153] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.696248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.696265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.696272] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.696278] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.696293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.706176] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.706315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.706331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.706338] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.706343] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.706358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.716231] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.716324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.716341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.716347] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.716352] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.716368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.726186] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.726285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.726301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.726307] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.726313] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.726329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.736271] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.736371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.736387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.736394] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.736400] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.736415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.746304] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.746399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.746415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.746422] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.746428] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.746443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.756334] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.756430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.756446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.756453] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.756459] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.756473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.766357] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.766450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.766469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.766476] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.766481] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.766496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.776389] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.776486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.776503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.776510] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.776515] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.776531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.786429] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.786522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.786539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.786546] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.786551] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.268 [2024-02-13 08:30:42.786567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.268 qpair failed and we were unable to recover it. 
00:30:09.268 [2024-02-13 08:30:42.796471] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.268 [2024-02-13 08:30:42.796574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.268 [2024-02-13 08:30:42.796590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.268 [2024-02-13 08:30:42.796597] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.268 [2024-02-13 08:30:42.796602] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.796617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.806481] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.806578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.806594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.806601] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.806606] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.806625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.816516] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.816612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.816629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.816635] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.816641] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.816661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.826552] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.826651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.826667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.826674] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.826679] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.826695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.836598] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.836697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.836713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.836720] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.836725] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.836741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.846616] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.846712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.846728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.846735] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.846741] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.846756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.856637] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.856735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.856755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.856762] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.856767] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.856783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.866661] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.866754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.866771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.866777] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.866783] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.866798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.876702] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.876798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.876813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.876820] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.876826] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.876841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.886727] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.886823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.886839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.886846] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.886851] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.886866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.896790] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.896884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.896900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.896907] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.896912] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.896932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.906786] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.906884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.906900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.906906] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.906912] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.906927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.916843] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.916990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.917006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.917013] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.269 [2024-02-13 08:30:42.917018] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.269 [2024-02-13 08:30:42.917033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.269 qpair failed and we were unable to recover it. 
00:30:09.269 [2024-02-13 08:30:42.926761] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.269 [2024-02-13 08:30:42.926866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.269 [2024-02-13 08:30:42.926883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.269 [2024-02-13 08:30:42.926889] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.270 [2024-02-13 08:30:42.926895] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.270 [2024-02-13 08:30:42.926910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.270 qpair failed and we were unable to recover it. 
00:30:09.270 [2024-02-13 08:30:42.936878] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.270 [2024-02-13 08:30:42.936992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.270 [2024-02-13 08:30:42.937008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.270 [2024-02-13 08:30:42.937014] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.270 [2024-02-13 08:30:42.937020] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.270 [2024-02-13 08:30:42.937035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.270 qpair failed and we were unable to recover it. 
00:30:09.270 [2024-02-13 08:30:42.946904] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.270 [2024-02-13 08:30:42.946996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.270 [2024-02-13 08:30:42.947016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.270 [2024-02-13 08:30:42.947023] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.270 [2024-02-13 08:30:42.947029] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.270 [2024-02-13 08:30:42.947045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.270 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:42.956958] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:42.957057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:42.957073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:42.957080] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:42.957086] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:42.957101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:42.966946] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:42.967048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:42.967064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:42.967071] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:42.967077] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:42.967093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:42.977012] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:42.977102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:42.977118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:42.977125] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:42.977131] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:42.977145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:42.986992] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:42.987095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:42.987111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:42.987117] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:42.987126] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:42.987141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:42.997052] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:42.997149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:42.997165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:42.997172] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:42.997177] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:42.997192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.006999] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.007107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.007123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.007130] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:43.007135] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:43.007150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.017108] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.017198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.017214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.017220] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:43.017226] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:43.017242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.027163] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.027261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.027277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.027284] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:43.027289] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:43.027305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.037167] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.037263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.037279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.037285] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:43.037291] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:43.037306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.047113] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.047211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.047227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.047234] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:43.047240] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:43.047255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.057212] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.057300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.057317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.057323] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:43.057329] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:43.057344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.067249] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.067346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.067363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.067369] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.531 [2024-02-13 08:30:43.067375] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.531 [2024-02-13 08:30:43.067390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.531 qpair failed and we were unable to recover it. 
00:30:09.531 [2024-02-13 08:30:43.077301] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.531 [2024-02-13 08:30:43.077398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.531 [2024-02-13 08:30:43.077414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.531 [2024-02-13 08:30:43.077421] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.077430] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.077445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.087316] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.087413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.087429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.087436] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.087442] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.087457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.097335] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.097430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.097446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.097453] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.097459] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.097474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.107471] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.107564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.107580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.107586] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.107592] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.107607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.117325] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.117428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.117444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.117450] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.117456] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.117471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.127421] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.127524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.127540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.127547] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.127552] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.127567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.137450] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.137546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.137562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.137569] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.137575] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.137590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.147475] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.147563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.147579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.147586] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.147591] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.147607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.157548] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:09.532 [2024-02-13 08:30:43.157662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:09.532 [2024-02-13 08:30:43.157678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:09.532 [2024-02-13 08:30:43.157685] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:09.532 [2024-02-13 08:30:43.157691] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:09.532 [2024-02-13 08:30:43.157706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.532 qpair failed and we were unable to recover it. 
00:30:09.532 [2024-02-13 08:30:43.167446] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.532 [2024-02-13 08:30:43.167542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.532 [2024-02-13 08:30:43.167558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.532 [2024-02-13 08:30:43.167567] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.532 [2024-02-13 08:30:43.167573] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.532 [2024-02-13 08:30:43.167588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.532 qpair failed and we were unable to recover it.
00:30:09.532 [2024-02-13 08:30:43.177574] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.532 [2024-02-13 08:30:43.177685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.532 [2024-02-13 08:30:43.177701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.532 [2024-02-13 08:30:43.177708] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.532 [2024-02-13 08:30:43.177713] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.532 [2024-02-13 08:30:43.177729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.532 qpair failed and we were unable to recover it.
00:30:09.532 [2024-02-13 08:30:43.187592] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.532 [2024-02-13 08:30:43.187688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.532 [2024-02-13 08:30:43.187704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.532 [2024-02-13 08:30:43.187711] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.532 [2024-02-13 08:30:43.187716] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.532 [2024-02-13 08:30:43.187731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.532 qpair failed and we were unable to recover it.
00:30:09.532 [2024-02-13 08:30:43.197642] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.532 [2024-02-13 08:30:43.197740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.532 [2024-02-13 08:30:43.197756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.532 [2024-02-13 08:30:43.197762] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.532 [2024-02-13 08:30:43.197768] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.532 [2024-02-13 08:30:43.197783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.532 qpair failed and we were unable to recover it.
00:30:09.532 [2024-02-13 08:30:43.207673] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.532 [2024-02-13 08:30:43.207766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.533 [2024-02-13 08:30:43.207782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.533 [2024-02-13 08:30:43.207788] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.533 [2024-02-13 08:30:43.207794] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.533 [2024-02-13 08:30:43.207809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.533 qpair failed and we were unable to recover it.
00:30:09.793 [2024-02-13 08:30:43.217734] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.793 [2024-02-13 08:30:43.217831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.793 [2024-02-13 08:30:43.217849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.793 [2024-02-13 08:30:43.217856] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.793 [2024-02-13 08:30:43.217861] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.793 [2024-02-13 08:30:43.217877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.793 qpair failed and we were unable to recover it.
00:30:09.793 [2024-02-13 08:30:43.227734] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.793 [2024-02-13 08:30:43.227841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.793 [2024-02-13 08:30:43.227857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.793 [2024-02-13 08:30:43.227864] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.793 [2024-02-13 08:30:43.227869] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.793 [2024-02-13 08:30:43.227885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.793 qpair failed and we were unable to recover it.
00:30:09.793 [2024-02-13 08:30:43.237745] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.793 [2024-02-13 08:30:43.237839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.793 [2024-02-13 08:30:43.237855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.793 [2024-02-13 08:30:43.237862] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.793 [2024-02-13 08:30:43.237867] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.793 [2024-02-13 08:30:43.237882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.793 qpair failed and we were unable to recover it.
00:30:09.793 [2024-02-13 08:30:43.247769] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.793 [2024-02-13 08:30:43.247865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.793 [2024-02-13 08:30:43.247881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.793 [2024-02-13 08:30:43.247888] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.793 [2024-02-13 08:30:43.247893] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.793 [2024-02-13 08:30:43.247909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.793 qpair failed and we were unable to recover it.
00:30:09.793 [2024-02-13 08:30:43.257756] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.793 [2024-02-13 08:30:43.257898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.793 [2024-02-13 08:30:43.257918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.793 [2024-02-13 08:30:43.257925] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.793 [2024-02-13 08:30:43.257930] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.793 [2024-02-13 08:30:43.257946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.267767] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.267865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.267880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.267887] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.267892] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.267907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.277899] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.278001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.278017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.278023] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.278029] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.278045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.287882] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.287973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.287989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.287996] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.288001] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.288016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.297918] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.298020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.298032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.298038] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.298044] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.298058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.307864] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.307959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.307975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.307982] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.307987] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.308003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.317986] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.318083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.318100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.318106] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.318112] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.318127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.327992] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.328132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.328148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.328154] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.328160] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.328175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.338024] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.338118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.338134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.338141] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.338147] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.338162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.348074] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.348191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.348211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.348218] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.348223] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.348239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.358094] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.358188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.358204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.358211] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.358216] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.358231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.368099] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.368194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.368210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.368216] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.368222] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.368238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.378134] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.378271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.378287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.378294] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.378299] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.794 [2024-02-13 08:30:43.378315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.794 qpair failed and we were unable to recover it.
00:30:09.794 [2024-02-13 08:30:43.388191] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.794 [2024-02-13 08:30:43.388309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.794 [2024-02-13 08:30:43.388325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.794 [2024-02-13 08:30:43.388332] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.794 [2024-02-13 08:30:43.388337] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.388356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.398218] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.398316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.398333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.398339] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.398345] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.398360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.408171] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.408276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.408291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.408298] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.408304] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.408319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.418251] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.418399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.418415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.418421] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.418427] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.418442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.428271] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.428400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.428416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.428423] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.428428] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.428444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.438331] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.438431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.438451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.438458] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.438463] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.438479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.448319] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.448418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.448434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.448441] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.448446] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.448462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.458343] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.458438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.458454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.458461] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.458466] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.458481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.468409] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.468548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.468564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.468570] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.468576] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.468590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:09.795 [2024-02-13 08:30:43.478358] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:09.795 [2024-02-13 08:30:43.478456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:09.795 [2024-02-13 08:30:43.478472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:09.795 [2024-02-13 08:30:43.478478] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:09.795 [2024-02-13 08:30:43.478487] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:09.795 [2024-02-13 08:30:43.478502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.795 qpair failed and we were unable to recover it.
00:30:10.056 [2024-02-13 08:30:43.488406] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.056 [2024-02-13 08:30:43.488533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.056 [2024-02-13 08:30:43.488549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.056 [2024-02-13 08:30:43.488555] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.056 [2024-02-13 08:30:43.488561] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.056 [2024-02-13 08:30:43.488577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.056 qpair failed and we were unable to recover it.
00:30:10.056 [2024-02-13 08:30:43.498449] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.056 [2024-02-13 08:30:43.498549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.056 [2024-02-13 08:30:43.498565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.056 [2024-02-13 08:30:43.498571] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.056 [2024-02-13 08:30:43.498577] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.056 [2024-02-13 08:30:43.498592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.056 qpair failed and we were unable to recover it.
00:30:10.056 [2024-02-13 08:30:43.508525] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.056 [2024-02-13 08:30:43.508620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.056 [2024-02-13 08:30:43.508635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.056 [2024-02-13 08:30:43.508642] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.056 [2024-02-13 08:30:43.508653] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.056 [2024-02-13 08:30:43.508669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.056 qpair failed and we were unable to recover it.
00:30:10.056 [2024-02-13 08:30:43.518582] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.056 [2024-02-13 08:30:43.518692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.056 [2024-02-13 08:30:43.518716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.056 [2024-02-13 08:30:43.518723] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.056 [2024-02-13 08:30:43.518729] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.056 [2024-02-13 08:30:43.518746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.056 qpair failed and we were unable to recover it.
00:30:10.056 [2024-02-13 08:30:43.528553] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.056 [2024-02-13 08:30:43.528663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.056 [2024-02-13 08:30:43.528680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.056 [2024-02-13 08:30:43.528686] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.056 [2024-02-13 08:30:43.528692] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.056 [2024-02-13 08:30:43.528707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.056 qpair failed and we were unable to recover it. 
00:30:10.056 [2024-02-13 08:30:43.538600] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.056 [2024-02-13 08:30:43.538702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.056 [2024-02-13 08:30:43.538719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.056 [2024-02-13 08:30:43.538725] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.056 [2024-02-13 08:30:43.538731] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.056 [2024-02-13 08:30:43.538746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.056 qpair failed and we were unable to recover it. 
00:30:10.056 [2024-02-13 08:30:43.548557] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.056 [2024-02-13 08:30:43.548704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.056 [2024-02-13 08:30:43.548721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.056 [2024-02-13 08:30:43.548727] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.056 [2024-02-13 08:30:43.548733] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.056 [2024-02-13 08:30:43.548748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.056 qpair failed and we were unable to recover it.
00:30:10.056 [2024-02-13 08:30:43.558679] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.056 [2024-02-13 08:30:43.558774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.056 [2024-02-13 08:30:43.558790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.056 [2024-02-13 08:30:43.558797] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.056 [2024-02-13 08:30:43.558802] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.056 [2024-02-13 08:30:43.558818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.056 qpair failed and we were unable to recover it.
00:30:10.056 [2024-02-13 08:30:43.568612] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.056 [2024-02-13 08:30:43.568713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.056 [2024-02-13 08:30:43.568729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.056 [2024-02-13 08:30:43.568736] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.056 [2024-02-13 08:30:43.568745] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.568761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.578683] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.578781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.578798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.578805] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.578811] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.578827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.588682] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.588784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.588800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.588807] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.588812] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.588828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.598709] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.598812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.598828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.598836] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.598841] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.598857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.608796] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.608893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.608909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.608916] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.608922] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.608938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.618827] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.618922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.618940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.618947] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.618953] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.618968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.628793] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.628883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.628899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.628905] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.628911] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.628926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.638913] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.639008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.639024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.639030] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.639036] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.639051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.648919] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.649058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.649074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.649080] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.649086] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.649101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.658961] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.659063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.659080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.659091] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.659097] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.659112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.668956] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.669051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.669067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.669074] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.669080] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.669095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.679008] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.679103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.679119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.679126] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.679131] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.679146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.689008] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.689119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.689135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.689142] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.689148] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.057 [2024-02-13 08:30:43.689163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.057 qpair failed and we were unable to recover it.
00:30:10.057 [2024-02-13 08:30:43.699074] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.057 [2024-02-13 08:30:43.699168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.057 [2024-02-13 08:30:43.699184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.057 [2024-02-13 08:30:43.699191] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.057 [2024-02-13 08:30:43.699196] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.058 [2024-02-13 08:30:43.699211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.058 qpair failed and we were unable to recover it.
00:30:10.058 [2024-02-13 08:30:43.709086] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.058 [2024-02-13 08:30:43.709174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.058 [2024-02-13 08:30:43.709190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.058 [2024-02-13 08:30:43.709197] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.058 [2024-02-13 08:30:43.709202] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.058 [2024-02-13 08:30:43.709217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.058 qpair failed and we were unable to recover it.
00:30:10.058 [2024-02-13 08:30:43.719072] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.058 [2024-02-13 08:30:43.719169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.058 [2024-02-13 08:30:43.719185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.058 [2024-02-13 08:30:43.719191] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.058 [2024-02-13 08:30:43.719197] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.058 [2024-02-13 08:30:43.719212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.058 qpair failed and we were unable to recover it.
00:30:10.058 [2024-02-13 08:30:43.729130] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.058 [2024-02-13 08:30:43.729237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.058 [2024-02-13 08:30:43.729254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.058 [2024-02-13 08:30:43.729260] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.058 [2024-02-13 08:30:43.729266] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.058 [2024-02-13 08:30:43.729281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.058 qpair failed and we were unable to recover it.
00:30:10.058 [2024-02-13 08:30:43.739108] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.058 [2024-02-13 08:30:43.739200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.058 [2024-02-13 08:30:43.739216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.058 [2024-02-13 08:30:43.739223] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.058 [2024-02-13 08:30:43.739229] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.058 [2024-02-13 08:30:43.739244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.058 qpair failed and we were unable to recover it.
00:30:10.318 [2024-02-13 08:30:43.749197] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.318 [2024-02-13 08:30:43.749299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.318 [2024-02-13 08:30:43.749315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.318 [2024-02-13 08:30:43.749325] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.318 [2024-02-13 08:30:43.749331] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.318 [2024-02-13 08:30:43.749346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.318 qpair failed and we were unable to recover it.
00:30:10.318 [2024-02-13 08:30:43.759222] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.318 [2024-02-13 08:30:43.759363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.318 [2024-02-13 08:30:43.759379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.318 [2024-02-13 08:30:43.759386] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.318 [2024-02-13 08:30:43.759392] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.318 [2024-02-13 08:30:43.759407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.318 qpair failed and we were unable to recover it.
00:30:10.318 [2024-02-13 08:30:43.769236] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.318 [2024-02-13 08:30:43.769326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.318 [2024-02-13 08:30:43.769342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.318 [2024-02-13 08:30:43.769349] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.318 [2024-02-13 08:30:43.769355] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.318 [2024-02-13 08:30:43.769371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.318 qpair failed and we were unable to recover it.
00:30:10.318 [2024-02-13 08:30:43.779280] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.318 [2024-02-13 08:30:43.779376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.318 [2024-02-13 08:30:43.779392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.318 [2024-02-13 08:30:43.779399] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.318 [2024-02-13 08:30:43.779405] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.318 [2024-02-13 08:30:43.779420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.318 qpair failed and we were unable to recover it.
00:30:10.318 [2024-02-13 08:30:43.789297] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.318 [2024-02-13 08:30:43.789393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.318 [2024-02-13 08:30:43.789409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.318 [2024-02-13 08:30:43.789415] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.318 [2024-02-13 08:30:43.789421] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.318 [2024-02-13 08:30:43.789436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.318 qpair failed and we were unable to recover it.
00:30:10.318 [2024-02-13 08:30:43.799322] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.318 [2024-02-13 08:30:43.799420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.318 [2024-02-13 08:30:43.799436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.318 [2024-02-13 08:30:43.799443] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.318 [2024-02-13 08:30:43.799449] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.318 [2024-02-13 08:30:43.799464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.318 qpair failed and we were unable to recover it.
00:30:10.318 [2024-02-13 08:30:43.809284] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.319 [2024-02-13 08:30:43.809400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.319 [2024-02-13 08:30:43.809416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.319 [2024-02-13 08:30:43.809423] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.319 [2024-02-13 08:30:43.809429] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.319 [2024-02-13 08:30:43.809445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.319 qpair failed and we were unable to recover it.
00:30:10.319 [2024-02-13 08:30:43.819374] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.319 [2024-02-13 08:30:43.819469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.319 [2024-02-13 08:30:43.819485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.319 [2024-02-13 08:30:43.819492] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.319 [2024-02-13 08:30:43.819498] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.319 [2024-02-13 08:30:43.819513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.319 qpair failed and we were unable to recover it.
00:30:10.319 [2024-02-13 08:30:43.829362] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.319 [2024-02-13 08:30:43.829454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.319 [2024-02-13 08:30:43.829470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.319 [2024-02-13 08:30:43.829477] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.319 [2024-02-13 08:30:43.829482] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.319 [2024-02-13 08:30:43.829497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.319 qpair failed and we were unable to recover it.
00:30:10.319 [2024-02-13 08:30:43.839478] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.319 [2024-02-13 08:30:43.839574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.319 [2024-02-13 08:30:43.839593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.319 [2024-02-13 08:30:43.839599] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.319 [2024-02-13 08:30:43.839605] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90
00:30:10.319 [2024-02-13 08:30:43.839620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:10.319 qpair failed and we were unable to recover it.
00:30:10.319 [2024-02-13 08:30:43.849427] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.849533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.849550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.849556] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.849562] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.849578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.859565] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.859666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.859682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.859689] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.859694] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.859710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.869548] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.869650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.869666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.869673] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.869678] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.869694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.879593] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.879691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.879708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.879714] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.879720] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.879739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.889627] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.889797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.889814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.889821] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.889826] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.889843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.899617] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.899711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.899727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.899734] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.899740] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.899756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.909676] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.909775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.909791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.909797] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.909803] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.909818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.919706] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.919804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.919820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.919826] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.919832] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.919847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.319 qpair failed and we were unable to recover it. 
00:30:10.319 [2024-02-13 08:30:43.929730] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.319 [2024-02-13 08:30:43.929824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.319 [2024-02-13 08:30:43.929843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.319 [2024-02-13 08:30:43.929849] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.319 [2024-02-13 08:30:43.929854] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.319 [2024-02-13 08:30:43.929871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.320 [2024-02-13 08:30:43.939765] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.320 [2024-02-13 08:30:43.939864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.320 [2024-02-13 08:30:43.939880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.320 [2024-02-13 08:30:43.939887] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.320 [2024-02-13 08:30:43.939893] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.320 [2024-02-13 08:30:43.939908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.320 [2024-02-13 08:30:43.949869] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.320 [2024-02-13 08:30:43.949963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.320 [2024-02-13 08:30:43.949979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.320 [2024-02-13 08:30:43.949986] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.320 [2024-02-13 08:30:43.949991] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.320 [2024-02-13 08:30:43.950006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.320 [2024-02-13 08:30:43.959824] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.320 [2024-02-13 08:30:43.959923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.320 [2024-02-13 08:30:43.959939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.320 [2024-02-13 08:30:43.959945] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.320 [2024-02-13 08:30:43.959951] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.320 [2024-02-13 08:30:43.959966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.320 [2024-02-13 08:30:43.969843] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.320 [2024-02-13 08:30:43.969935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.320 [2024-02-13 08:30:43.969952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.320 [2024-02-13 08:30:43.969958] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.320 [2024-02-13 08:30:43.969964] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.320 [2024-02-13 08:30:43.969982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.320 [2024-02-13 08:30:43.979871] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.320 [2024-02-13 08:30:43.979972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.320 [2024-02-13 08:30:43.979988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.320 [2024-02-13 08:30:43.979995] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.320 [2024-02-13 08:30:43.980001] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.320 [2024-02-13 08:30:43.980016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.320 [2024-02-13 08:30:43.989894] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.320 [2024-02-13 08:30:43.989990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.320 [2024-02-13 08:30:43.990006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.320 [2024-02-13 08:30:43.990013] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.320 [2024-02-13 08:30:43.990018] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.320 [2024-02-13 08:30:43.990033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.320 [2024-02-13 08:30:43.999932] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.320 [2024-02-13 08:30:44.000027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.320 [2024-02-13 08:30:44.000043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.320 [2024-02-13 08:30:44.000050] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.320 [2024-02-13 08:30:44.000055] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.320 [2024-02-13 08:30:44.000071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.320 qpair failed and we were unable to recover it. 
00:30:10.580 [2024-02-13 08:30:44.009937] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.580 [2024-02-13 08:30:44.010033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.580 [2024-02-13 08:30:44.010049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.580 [2024-02-13 08:30:44.010056] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.580 [2024-02-13 08:30:44.010061] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.580 [2024-02-13 08:30:44.010078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.580 qpair failed and we were unable to recover it. 
00:30:10.580 [2024-02-13 08:30:44.019917] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.580 [2024-02-13 08:30:44.020021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.580 [2024-02-13 08:30:44.020037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.020044] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.020050] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.020065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.030033] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.030129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.030145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.030152] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.030158] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.030174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.040028] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.040120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.040136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.040143] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.040149] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.040164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.050076] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.050175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.050191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.050198] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.050203] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.050218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.060095] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.060193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.060208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.060215] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.060224] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.060239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.070119] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.070209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.070225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.070232] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.070237] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.070253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.080168] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.080266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.080282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.080289] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.080294] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.080310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.090194] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.090292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.090309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.090315] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.090321] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.090336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.100269] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.100368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.100384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.100391] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.100396] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.100412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.110240] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.110334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.110350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.110357] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.110363] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.110378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.120273] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.120370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.120386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.120393] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.120398] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.120413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.130287] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.130384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.130400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.130407] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.130412] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.130428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.140326] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.581 [2024-02-13 08:30:44.140419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.581 [2024-02-13 08:30:44.140435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.581 [2024-02-13 08:30:44.140442] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.581 [2024-02-13 08:30:44.140448] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.581 [2024-02-13 08:30:44.140463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.581 qpair failed and we were unable to recover it. 
00:30:10.581 [2024-02-13 08:30:44.150342] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.150432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.150448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.150458] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.150463] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.150479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.160374] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.160512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.160528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.160534] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.160540] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.160555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.170401] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.170502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.170518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.170524] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.170529] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.170545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.180438] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.180527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.180543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.180550] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.180555] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.180571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.190406] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.190503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.190519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.190525] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.190531] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.190546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.200487] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.200584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.200600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.200607] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.200613] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.200628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.210530] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.210638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.210658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.210665] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.210671] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.210686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.220546] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.220641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.220662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.220669] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.220674] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.220689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.230561] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.230678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.230693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.230700] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.230705] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.230721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.240699] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.240797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.240813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.240823] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.240828] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.240844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.250623] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.250725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.250742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.250748] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.250754] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.250769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.582 [2024-02-13 08:30:44.260694] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.582 [2024-02-13 08:30:44.260801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.582 [2024-02-13 08:30:44.260817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.582 [2024-02-13 08:30:44.260823] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.582 [2024-02-13 08:30:44.260829] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.582 [2024-02-13 08:30:44.260845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.582 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.270685] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.270782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.270798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.270805] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.270811] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.270826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.280721] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.280821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.280837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.280844] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.280849] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.280864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.290663] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.290759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.290775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.290781] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.290787] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.290802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.300784] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.300987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.301004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.301010] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.301017] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.301032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.310807] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.310903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.310918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.310924] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.310930] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.310946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.320841] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.320936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.320952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.320959] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.320965] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.320980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.330877] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.330986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.331005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.331011] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.331017] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.331032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.340895] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.341039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.341055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.843 [2024-02-13 08:30:44.341062] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.843 [2024-02-13 08:30:44.341067] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.843 [2024-02-13 08:30:44.341082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.843 qpair failed and we were unable to recover it. 
00:30:10.843 [2024-02-13 08:30:44.350927] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.843 [2024-02-13 08:30:44.351023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.843 [2024-02-13 08:30:44.351041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.351047] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.351053] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.351068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.360955] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.361068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.361084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.361090] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.361095] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.361111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.370982] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.371089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.371105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.371112] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.371117] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.371136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.380943] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.381041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.381058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.381064] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.381070] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.381085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.391064] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.391159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.391175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.391181] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.391187] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.391202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.401109] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.401228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.401244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.401251] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.401256] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.401271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.411086] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.411183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.411198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.411205] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.411210] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.411225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.421044] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.421181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.421202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.421208] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.421214] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.421229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.431160] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.431274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.431290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.431296] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.431302] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.431317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.441199] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.441293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.441309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.441316] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.441321] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.441337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.451200] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.451298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.451315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.451322] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.451327] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.451343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.461239] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.461331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.461348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.461354] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.461360] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.461378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.471256] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.471354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.844 [2024-02-13 08:30:44.471370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.844 [2024-02-13 08:30:44.471376] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.844 [2024-02-13 08:30:44.471382] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.844 [2024-02-13 08:30:44.471398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.844 qpair failed and we were unable to recover it. 
00:30:10.844 [2024-02-13 08:30:44.481311] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.844 [2024-02-13 08:30:44.481405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.845 [2024-02-13 08:30:44.481422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.845 [2024-02-13 08:30:44.481428] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.845 [2024-02-13 08:30:44.481434] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.845 [2024-02-13 08:30:44.481449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.845 qpair failed and we were unable to recover it. 
00:30:10.845 [2024-02-13 08:30:44.491308] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.845 [2024-02-13 08:30:44.491406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.845 [2024-02-13 08:30:44.491423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.845 [2024-02-13 08:30:44.491429] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.845 [2024-02-13 08:30:44.491435] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.845 [2024-02-13 08:30:44.491450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.845 qpair failed and we were unable to recover it. 
00:30:10.845 [2024-02-13 08:30:44.501347] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.845 [2024-02-13 08:30:44.501445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.845 [2024-02-13 08:30:44.501460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.845 [2024-02-13 08:30:44.501467] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.845 [2024-02-13 08:30:44.501473] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.845 [2024-02-13 08:30:44.501488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.845 qpair failed and we were unable to recover it. 
00:30:10.845 [2024-02-13 08:30:44.511386] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.845 [2024-02-13 08:30:44.511478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.845 [2024-02-13 08:30:44.511497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.845 [2024-02-13 08:30:44.511503] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.845 [2024-02-13 08:30:44.511509] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.845 [2024-02-13 08:30:44.511524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.845 qpair failed and we were unable to recover it. 
00:30:10.845 [2024-02-13 08:30:44.521425] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.845 [2024-02-13 08:30:44.521521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.845 [2024-02-13 08:30:44.521537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.845 [2024-02-13 08:30:44.521544] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.845 [2024-02-13 08:30:44.521550] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:10.845 [2024-02-13 08:30:44.521565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:10.845 qpair failed and we were unable to recover it. 
00:30:11.105 [2024-02-13 08:30:44.531428] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.105 [2024-02-13 08:30:44.531538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.105 [2024-02-13 08:30:44.531554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.105 [2024-02-13 08:30:44.531561] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.105 [2024-02-13 08:30:44.531566] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.105 [2024-02-13 08:30:44.531581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.105 qpair failed and we were unable to recover it. 
00:30:11.105 [2024-02-13 08:30:44.541470] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.105 [2024-02-13 08:30:44.541566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.105 [2024-02-13 08:30:44.541581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.105 [2024-02-13 08:30:44.541588] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.105 [2024-02-13 08:30:44.541594] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.105 [2024-02-13 08:30:44.541609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.105 qpair failed and we were unable to recover it. 
00:30:11.105 [2024-02-13 08:30:44.551486] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.105 [2024-02-13 08:30:44.551583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.105 [2024-02-13 08:30:44.551599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.105 [2024-02-13 08:30:44.551606] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.105 [2024-02-13 08:30:44.551615] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.105 [2024-02-13 08:30:44.551630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.105 qpair failed and we were unable to recover it. 
00:30:11.105 [2024-02-13 08:30:44.561529] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.105 [2024-02-13 08:30:44.561624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.105 [2024-02-13 08:30:44.561641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.105 [2024-02-13 08:30:44.561652] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.105 [2024-02-13 08:30:44.561658] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.105 [2024-02-13 08:30:44.561673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.105 qpair failed and we were unable to recover it. 
00:30:11.105 [2024-02-13 08:30:44.571532] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.105 [2024-02-13 08:30:44.571630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.105 [2024-02-13 08:30:44.571650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.105 [2024-02-13 08:30:44.571657] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.105 [2024-02-13 08:30:44.571663] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.105 [2024-02-13 08:30:44.571679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.105 qpair failed and we were unable to recover it. 
00:30:11.105 [2024-02-13 08:30:44.581569] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.105 [2024-02-13 08:30:44.581665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.105 [2024-02-13 08:30:44.581681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.105 [2024-02-13 08:30:44.581688] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.105 [2024-02-13 08:30:44.581694] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.105 [2024-02-13 08:30:44.581709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.591678] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.591811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.591828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.591834] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.591840] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.591856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.601649] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.601766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.601782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.601789] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.601794] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.601809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.611668] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.611771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.611786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.611793] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.611799] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.611814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.621656] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.621753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.621771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.621778] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.621783] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.621798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.631708] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.631802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.631817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.631824] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.631829] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.631844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.641728] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.641828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.641844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.641851] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.641859] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.641874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.651760] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.651854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.651870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.651877] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.651882] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.651897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.661786] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.661882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.661897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.661904] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.661910] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.661925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.671842] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.671943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.671959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.671966] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.671972] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.671987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 
00:30:11.106 [2024-02-13 08:30:44.681876] ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.106 [2024-02-13 08:30:44.681998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.106 [2024-02-13 08:30:44.682014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.106 [2024-02-13 08:30:44.682021] nvme_tcp.c:2339:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.106 [2024-02-13 08:30:44.682026] nvme_tcp.c:2136:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb0b8000b90 00:30:11.106 [2024-02-13 08:30:44.682041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.106 qpair failed and we were unable to recover it. 00:30:11.106 [2024-02-13 08:30:44.682066] nvme_ctrlr.c:4325:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:11.106 A controller has encountered a failure and is being reset. 00:30:11.106 [2024-02-13 08:30:44.682121] nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e7fe0 (9): Bad file descriptor 00:30:11.106 Controller properly reset. 
00:30:11.366 Initializing NVMe Controllers 00:30:11.366 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:11.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:11.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:11.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:11.366 Initialization complete. Launching workers. 00:30:11.366 Starting thread on core 1 00:30:11.366 Starting thread on core 2 00:30:11.366 Starting thread on core 3 00:30:11.366 Starting thread on core 0 00:30:11.366 08:30:44 -- host/target_disconnect.sh@59 -- # sync 00:30:11.366 00:30:11.366 real 0m11.252s 00:30:11.366 user 0m20.611s 00:30:11.366 sys 0m4.472s 00:30:11.366 08:30:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:11.366 08:30:44 -- common/autotest_common.sh@10 -- # set +x 00:30:11.366 ************************************ 00:30:11.366 END TEST nvmf_target_disconnect_tc2 00:30:11.366 ************************************ 00:30:11.366 08:30:44 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:11.366 08:30:44 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:11.366 08:30:44 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:11.366 08:30:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:11.366 08:30:44 -- nvmf/common.sh@116 -- # sync 00:30:11.366 08:30:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:11.366 08:30:44 -- nvmf/common.sh@119 -- # set +e 00:30:11.366 08:30:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:11.366 08:30:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:11.366 rmmod nvme_tcp 00:30:11.366 rmmod nvme_fabrics 00:30:11.366 rmmod nvme_keyring 00:30:11.366 08:30:44 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:11.366 08:30:44 -- nvmf/common.sh@123 -- # set -e 00:30:11.366 08:30:44 -- nvmf/common.sh@124 -- # return 0 00:30:11.367 08:30:44 -- nvmf/common.sh@477 -- # '[' -n 2445940 ']' 00:30:11.367 08:30:44 -- nvmf/common.sh@478 -- # killprocess 2445940 00:30:11.367 08:30:44 -- common/autotest_common.sh@924 -- # '[' -z 2445940 ']' 00:30:11.367 08:30:44 -- common/autotest_common.sh@928 -- # kill -0 2445940 00:30:11.367 08:30:44 -- common/autotest_common.sh@929 -- # uname 00:30:11.367 08:30:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:30:11.367 08:30:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2445940 00:30:11.367 08:30:44 -- common/autotest_common.sh@930 -- # process_name=reactor_4 00:30:11.367 08:30:44 -- common/autotest_common.sh@934 -- # '[' reactor_4 = sudo ']' 00:30:11.367 08:30:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2445940' 00:30:11.367 killing process with pid 2445940 00:30:11.367 08:30:44 -- common/autotest_common.sh@943 -- # kill 2445940 00:30:11.367 08:30:44 -- common/autotest_common.sh@948 -- # wait 2445940 00:30:11.626 08:30:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:11.626 08:30:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:11.626 08:30:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:11.626 08:30:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.626 08:30:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:11.626 08:30:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.626 08:30:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.626 08:30:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.165 08:30:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:14.165 00:30:14.165 real 0m19.652s 00:30:14.165 user 0m47.643s 00:30:14.165 sys 0m9.171s 00:30:14.165 08:30:47 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:30:14.165 08:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:14.165 ************************************ 00:30:14.165 END TEST nvmf_target_disconnect 00:30:14.165 ************************************ 00:30:14.165 08:30:47 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:14.165 08:30:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:14.165 08:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:14.165 08:30:47 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:14.165 00:30:14.165 real 23m29.543s 00:30:14.165 user 61m52.196s 00:30:14.165 sys 6m10.704s 00:30:14.165 08:30:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:14.165 08:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:14.165 ************************************ 00:30:14.165 END TEST nvmf_tcp 00:30:14.165 ************************************ 00:30:14.165 08:30:47 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:14.165 08:30:47 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:14.165 08:30:47 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:30:14.165 08:30:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:14.165 08:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:14.165 ************************************ 00:30:14.165 START TEST spdkcli_nvmf_tcp 00:30:14.165 ************************************ 00:30:14.165 08:30:47 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:14.165 * Looking for test storage... 
00:30:14.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:14.165 08:30:47 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:14.165 08:30:47 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:14.165 08:30:47 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:14.165 08:30:47 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.165 08:30:47 -- nvmf/common.sh@7 -- # uname -s 00:30:14.165 08:30:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.165 08:30:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.165 08:30:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.165 08:30:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.165 08:30:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.165 08:30:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.165 08:30:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.165 08:30:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.165 08:30:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.165 08:30:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.165 08:30:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:14.165 08:30:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:14.165 08:30:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.165 08:30:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.165 08:30:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.165 08:30:47 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.165 08:30:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.165 08:30:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.165 08:30:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.165 08:30:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.165 08:30:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.165 08:30:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.165 08:30:47 -- paths/export.sh@5 -- # export PATH 00:30:14.165 08:30:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.165 08:30:47 -- nvmf/common.sh@46 -- # : 0 00:30:14.165 08:30:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:14.165 08:30:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:14.165 08:30:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:14.165 08:30:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.165 08:30:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.166 08:30:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:14.166 08:30:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:14.166 08:30:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:14.166 08:30:47 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:14.166 08:30:47 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:14.166 08:30:47 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:14.166 08:30:47 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:14.166 08:30:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:14.166 08:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:14.166 08:30:47 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:14.166 08:30:47 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2447467 00:30:14.166 08:30:47 -- spdkcli/common.sh@34 -- # waitforlisten 2447467 00:30:14.166 08:30:47 -- common/autotest_common.sh@817 -- # '[' -z 2447467 ']' 00:30:14.166 08:30:47 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:14.166 08:30:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.166 08:30:47 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:30:14.166 08:30:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.166 08:30:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:14.166 08:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:14.166 [2024-02-13 08:30:47.522600] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:30:14.166 [2024-02-13 08:30:47.522657] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447467 ] 00:30:14.166 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.166 [2024-02-13 08:30:47.579417] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.166 [2024-02-13 08:30:47.654729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:14.166 [2024-02-13 08:30:47.654879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.166 [2024-02-13 08:30:47.654882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.735 08:30:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:14.735 08:30:48 -- common/autotest_common.sh@850 -- # return 0 00:30:14.735 08:30:48 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:14.735 08:30:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:14.735 08:30:48 -- common/autotest_common.sh@10 -- # set +x 00:30:14.735 08:30:48 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:14.735 08:30:48 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:14.735 08:30:48 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:14.735 08:30:48 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:30:14.735 08:30:48 -- common/autotest_common.sh@10 -- # set +x 00:30:14.735 08:30:48 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:14.735 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:14.735 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:14.735 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:14.735 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:14.735 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:14.735 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:14.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create 
Malloc1'\'' '\''Malloc1'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:14.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:14.735 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:14.735 ' 00:30:15.304 [2024-02-13 08:30:48.683274] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:17.210 [2024-02-13 08:30:50.718531] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.590 [2024-02-13 08:30:51.894414] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
127.0.0.1 port 4260 *** 00:30:20.531 [2024-02-13 08:30:54.057198] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:22.438 [2024-02-13 08:30:55.915165] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:23.818 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:23.818 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:23.818 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:23.818 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:23.818 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:23.818 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:23.818 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:23.818 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.818 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.818 Executing 
command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:23.818 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:23.818 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:23.818 08:30:57 -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:30:23.818 08:30:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:23.818 08:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.818 08:30:57 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:23.818 08:30:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:23.818 08:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.818 08:30:57 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:23.818 08:30:57 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:24.387 08:30:57 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:24.387 08:30:57 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:24.387 08:30:57 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:24.387 08:30:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:24.387 08:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:24.387 08:30:57 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:24.387 08:30:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:24.387 08:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:24.387 08:30:57 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:24.387 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:24.387 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:24.387 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 
00:30:24.387 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:24.387 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:24.387 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:24.387 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:24.387 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:24.387 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:24.387 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:24.387 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:24.387 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:24.387 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:24.387 ' 00:30:29.678 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:29.678 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:29.678 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.678 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:29.678 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:29.678 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:29.678 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:29.678 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.678 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:29.678 Executing 
command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:29.678 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:29.678 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:29.678 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:29.678 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:29.678 08:31:02 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:29.678 08:31:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:29.678 08:31:02 -- common/autotest_common.sh@10 -- # set +x 00:30:29.678 08:31:02 -- spdkcli/nvmf.sh@90 -- # killprocess 2447467 00:30:29.678 08:31:02 -- common/autotest_common.sh@924 -- # '[' -z 2447467 ']' 00:30:29.678 08:31:02 -- common/autotest_common.sh@928 -- # kill -0 2447467 00:30:29.678 08:31:02 -- common/autotest_common.sh@929 -- # uname 00:30:29.678 08:31:02 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:30:29.678 08:31:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2447467 00:30:29.678 08:31:02 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:30:29.678 08:31:02 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:30:29.678 08:31:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2447467' 00:30:29.678 killing process with pid 2447467 00:30:29.678 08:31:02 -- common/autotest_common.sh@943 -- # kill 2447467 00:30:29.678 [2024-02-13 08:31:02.950200] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:29.678 08:31:02 -- common/autotest_common.sh@948 -- # wait 2447467 00:30:29.678 08:31:03 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:29.678 08:31:03 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:29.678 08:31:03 -- spdkcli/common.sh@13 -- # '[' -n 2447467 ']' 00:30:29.678 08:31:03 -- 
spdkcli/common.sh@14 -- # killprocess 2447467 00:30:29.678 08:31:03 -- common/autotest_common.sh@924 -- # '[' -z 2447467 ']' 00:30:29.678 08:31:03 -- common/autotest_common.sh@928 -- # kill -0 2447467 00:30:29.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (2447467) - No such process 00:30:29.678 08:31:03 -- common/autotest_common.sh@951 -- # echo 'Process with pid 2447467 is not found' 00:30:29.678 Process with pid 2447467 is not found 00:30:29.678 08:31:03 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:29.678 08:31:03 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:29.678 08:31:03 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:29.678 00:30:29.678 real 0m15.778s 00:30:29.678 user 0m32.615s 00:30:29.678 sys 0m0.679s 00:30:29.678 08:31:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:29.678 08:31:03 -- common/autotest_common.sh@10 -- # set +x 00:30:29.678 ************************************ 00:30:29.678 END TEST spdkcli_nvmf_tcp 00:30:29.678 ************************************ 00:30:29.678 08:31:03 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:29.678 08:31:03 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:30:29.678 08:31:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:29.678 08:31:03 -- common/autotest_common.sh@10 -- # set +x 00:30:29.678 ************************************ 00:30:29.678 START TEST nvmf_identify_passthru 00:30:29.678 ************************************ 00:30:29.678 08:31:03 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:29.678 * Looking 
for test storage... 00:30:29.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.678 08:31:03 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.678 08:31:03 -- nvmf/common.sh@7 -- # uname -s 00:30:29.678 08:31:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.678 08:31:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.678 08:31:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.678 08:31:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.678 08:31:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.678 08:31:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.678 08:31:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.678 08:31:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.678 08:31:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.678 08:31:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.678 08:31:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:29.678 08:31:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:29.678 08:31:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.678 08:31:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.678 08:31:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.678 08:31:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.678 08:31:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.679 08:31:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.679 08:31:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.679 08:31:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- paths/export.sh@5 -- # export PATH 00:30:29.679 08:31:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- nvmf/common.sh@46 -- # : 0 00:30:29.679 08:31:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:29.679 08:31:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:29.679 
08:31:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:29.679 08:31:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.679 08:31:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.679 08:31:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:29.679 08:31:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:29.679 08:31:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:29.679 08:31:03 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.679 08:31:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.679 08:31:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.679 08:31:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.679 08:31:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- paths/export.sh@5 -- # export PATH 00:30:29.679 08:31:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.679 08:31:03 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:29.679 08:31:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:29.679 08:31:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.679 08:31:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:29.679 08:31:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:29.679 08:31:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:29.679 08:31:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.679 08:31:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:29.679 08:31:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.679 08:31:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:29.679 08:31:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:29.679 08:31:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:29.679 08:31:03 -- 
common/autotest_common.sh@10 -- # set +x 00:30:36.250 08:31:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:36.250 08:31:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:36.250 08:31:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:36.250 08:31:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:36.250 08:31:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:36.250 08:31:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:36.250 08:31:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:36.250 08:31:08 -- nvmf/common.sh@294 -- # net_devs=() 00:30:36.250 08:31:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:36.250 08:31:08 -- nvmf/common.sh@295 -- # e810=() 00:30:36.250 08:31:08 -- nvmf/common.sh@295 -- # local -ga e810 00:30:36.251 08:31:08 -- nvmf/common.sh@296 -- # x722=() 00:30:36.251 08:31:08 -- nvmf/common.sh@296 -- # local -ga x722 00:30:36.251 08:31:08 -- nvmf/common.sh@297 -- # mlx=() 00:30:36.251 08:31:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:36.251 08:31:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.251 08:31:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:36.251 08:31:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:36.251 08:31:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:36.251 08:31:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:36.251 08:31:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:36.251 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:36.251 08:31:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:36.251 08:31:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:36.251 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:36.251 08:31:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:36.251 08:31:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:36.251 08:31:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:36.251 08:31:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:36.251 08:31:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.251 08:31:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:36.251 Found net devices under 0000:af:00.0: cvl_0_0 00:30:36.251 08:31:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.251 08:31:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:36.251 08:31:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.251 08:31:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:36.251 08:31:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.251 08:31:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:36.251 Found net devices under 0000:af:00.1: cvl_0_1 00:30:36.251 08:31:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.251 08:31:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:36.251 08:31:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:36.251 08:31:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:36.251 08:31:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:36.251 08:31:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.251 08:31:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.251 08:31:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.251 08:31:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:36.251 08:31:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.251 08:31:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.251 08:31:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:36.251 08:31:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.251 08:31:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:36.251 08:31:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:36.251 08:31:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:36.251 08:31:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.251 08:31:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.251 08:31:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.251 08:31:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.251 08:31:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:36.251 08:31:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.251 08:31:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.251 08:31:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.251 08:31:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:36.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:30:36.251 00:30:36.251 --- 10.0.0.2 ping statistics --- 00:30:36.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.251 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:30:36.251 08:31:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:36.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:30:36.251 00:30:36.251 --- 10.0.0.1 ping statistics --- 00:30:36.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.251 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:30:36.251 08:31:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.251 08:31:09 -- nvmf/common.sh@410 -- # return 0 00:30:36.251 08:31:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:36.251 08:31:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.251 08:31:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:36.251 08:31:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:36.251 08:31:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.251 08:31:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:36.251 08:31:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:36.251 08:31:09 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:36.251 08:31:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:36.251 08:31:09 -- common/autotest_common.sh@10 -- # set +x 00:30:36.251 08:31:09 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:36.251 08:31:09 -- common/autotest_common.sh@1507 -- # bdfs=() 00:30:36.251 08:31:09 -- common/autotest_common.sh@1507 -- # local bdfs 00:30:36.251 08:31:09 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:30:36.251 08:31:09 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:30:36.251 08:31:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:36.251 08:31:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:30:36.251 08:31:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:36.251 08:31:09 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:36.251 08:31:09 -- 
common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:36.251 08:31:09 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:36.251 08:31:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:30:36.251 08:31:09 -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:30:36.251 08:31:09 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:30:36.251 08:31:09 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:30:36.251 08:31:09 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:30:36.251 08:31:09 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:36.251 08:31:09 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:36.251 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.446 08:31:13 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ807001JM1P0FGN 00:30:40.446 08:31:13 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:40.446 08:31:13 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:40.446 08:31:13 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:30:40.446 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.738 08:31:17 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:43.738 08:31:17 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:43.738 08:31:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:43.738 08:31:17 -- common/autotest_common.sh@10 -- # set +x 00:30:43.997 08:31:17 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:43.997 08:31:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:43.997 08:31:17 -- common/autotest_common.sh@10 -- # set +x 00:30:43.997 08:31:17 -- target/identify_passthru.sh@31 -- # 
nvmfpid=2454771 00:30:43.997 08:31:17 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.997 08:31:17 -- target/identify_passthru.sh@35 -- # waitforlisten 2454771 00:30:43.997 08:31:17 -- common/autotest_common.sh@817 -- # '[' -z 2454771 ']' 00:30:43.997 08:31:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.997 08:31:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:43.997 08:31:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.997 08:31:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:43.998 08:31:17 -- common/autotest_common.sh@10 -- # set +x 00:30:43.998 08:31:17 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:43.998 [2024-02-13 08:31:17.503081] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:30:43.998 [2024-02-13 08:31:17.503135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.998 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.998 [2024-02-13 08:31:17.566402] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.998 [2024-02-13 08:31:17.642733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:43.998 [2024-02-13 08:31:17.642841] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:43.998 [2024-02-13 08:31:17.642849] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.998 [2024-02-13 08:31:17.642855] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.998 [2024-02-13 08:31:17.642901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.998 [2024-02-13 08:31:17.642916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.998 [2024-02-13 08:31:17.643019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.998 [2024-02-13 08:31:17.643020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.936 08:31:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:44.936 08:31:18 -- common/autotest_common.sh@850 -- # return 0 00:30:44.936 08:31:18 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:44.936 08:31:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.936 08:31:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.936 INFO: Log level set to 20 00:30:44.936 INFO: Requests: 00:30:44.936 { 00:30:44.936 "jsonrpc": "2.0", 00:30:44.936 "method": "nvmf_set_config", 00:30:44.936 "id": 1, 00:30:44.936 "params": { 00:30:44.936 "admin_cmd_passthru": { 00:30:44.936 "identify_ctrlr": true 00:30:44.936 } 00:30:44.936 } 00:30:44.936 } 00:30:44.936 00:30:44.936 INFO: response: 00:30:44.936 { 00:30:44.936 "jsonrpc": "2.0", 00:30:44.936 "id": 1, 00:30:44.936 "result": true 00:30:44.936 } 00:30:44.936 00:30:44.936 08:31:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.936 08:31:18 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:44.937 08:31:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.937 08:31:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.937 INFO: Setting log level to 20 00:30:44.937 INFO: Setting log level to 20 
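The `INFO: Requests:` / `INFO: response:` pairs in the trace are the JSON-RPC 2.0 messages `rpc_cmd` exchanges with the target over `/var/tmp/spdk.sock`. A hedged sketch of assembling such a request body in shell (the helper name is made up; the real test uses SPDK's `scripts/rpc.py` client, not hand-built JSON):

```shell
# build_rpc_request METHOD ID PARAMS_JSON
# Emits a JSON-RPC 2.0 request shaped like the nvmf_set_config call
# logged above. Purely illustrative; params must already be valid JSON.
build_rpc_request() {
    local method=$1 id=$2 params=$3
    printf '{"jsonrpc": "2.0", "method": "%s", "id": %s, "params": %s}\n' \
        "$method" "$id" "$params"
}

build_rpc_request nvmf_set_config 1 \
    '{"admin_cmd_passthru": {"identify_ctrlr": true}}'
```

In the run above the target answers `"result": true`, after which `framework_start_init` brings the subsystems up with the custom identify handler enabled.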
00:30:44.937 INFO: Log level set to 20 00:30:44.937 INFO: Log level set to 20 00:30:44.937 INFO: Requests: 00:30:44.937 { 00:30:44.937 "jsonrpc": "2.0", 00:30:44.937 "method": "framework_start_init", 00:30:44.937 "id": 1 00:30:44.937 } 00:30:44.937 00:30:44.937 INFO: Requests: 00:30:44.937 { 00:30:44.937 "jsonrpc": "2.0", 00:30:44.937 "method": "framework_start_init", 00:30:44.937 "id": 1 00:30:44.937 } 00:30:44.937 00:30:44.937 [2024-02-13 08:31:18.390525] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:44.937 INFO: response: 00:30:44.937 { 00:30:44.937 "jsonrpc": "2.0", 00:30:44.937 "id": 1, 00:30:44.937 "result": true 00:30:44.937 } 00:30:44.937 00:30:44.937 INFO: response: 00:30:44.937 { 00:30:44.937 "jsonrpc": "2.0", 00:30:44.937 "id": 1, 00:30:44.937 "result": true 00:30:44.937 } 00:30:44.937 00:30:44.937 08:31:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.937 08:31:18 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.937 08:31:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.937 08:31:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.937 INFO: Setting log level to 40 00:30:44.937 INFO: Setting log level to 40 00:30:44.937 INFO: Setting log level to 40 00:30:44.937 [2024-02-13 08:31:18.403903] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.937 08:31:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:44.937 08:31:18 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:44.937 08:31:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:44.937 08:31:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.937 08:31:18 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:30:44.937 08:31:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:44.937 08:31:18 -- common/autotest_common.sh@10 -- # set +x 
00:30:48.249 Nvme0n1 00:30:48.249 08:31:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.249 08:31:21 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:48.249 08:31:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.249 08:31:21 -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 08:31:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.249 08:31:21 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:48.249 08:31:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.249 08:31:21 -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 08:31:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.249 08:31:21 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.249 08:31:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.249 08:31:21 -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 [2024-02-13 08:31:21.301055] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.249 08:31:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.249 08:31:21 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:48.249 08:31:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.249 08:31:21 -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 [2024-02-13 08:31:21.308845] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:48.249 [ 00:30:48.249 { 00:30:48.249 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:48.249 "subtype": "Discovery", 00:30:48.249 "listen_addresses": [], 00:30:48.249 "allow_any_host": true, 00:30:48.249 "hosts": [] 00:30:48.249 }, 00:30:48.249 { 
00:30:48.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:48.249 "subtype": "NVMe", 00:30:48.249 "listen_addresses": [ 00:30:48.249 { 00:30:48.249 "transport": "TCP", 00:30:48.249 "trtype": "TCP", 00:30:48.249 "adrfam": "IPv4", 00:30:48.249 "traddr": "10.0.0.2", 00:30:48.249 "trsvcid": "4420" 00:30:48.249 } 00:30:48.249 ], 00:30:48.249 "allow_any_host": true, 00:30:48.249 "hosts": [], 00:30:48.249 "serial_number": "SPDK00000000000001", 00:30:48.249 "model_number": "SPDK bdev Controller", 00:30:48.249 "max_namespaces": 1, 00:30:48.249 "min_cntlid": 1, 00:30:48.249 "max_cntlid": 65519, 00:30:48.249 "namespaces": [ 00:30:48.249 { 00:30:48.249 "nsid": 1, 00:30:48.249 "bdev_name": "Nvme0n1", 00:30:48.249 "name": "Nvme0n1", 00:30:48.249 "nguid": "19F6AE51553A4DEA80C61798715142BB", 00:30:48.249 "uuid": "19f6ae51-553a-4dea-80c6-1798715142bb" 00:30:48.249 } 00:30:48.249 ] 00:30:48.249 } 00:30:48.249 ] 00:30:48.249 08:31:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.249 08:31:21 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:48.249 08:31:21 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:48.249 08:31:21 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:48.249 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.249 08:31:21 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:30:48.249 08:31:21 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:48.249 08:31:21 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:48.249 08:31:21 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:48.249 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.249 
08:31:21 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:48.249 08:31:21 -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:30:48.249 08:31:21 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:48.249 08:31:21 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.249 08:31:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.249 08:31:21 -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 08:31:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.249 08:31:21 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:48.249 08:31:21 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:48.249 08:31:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:48.249 08:31:21 -- nvmf/common.sh@116 -- # sync 00:30:48.249 08:31:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:48.249 08:31:21 -- nvmf/common.sh@119 -- # set +e 00:30:48.249 08:31:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:48.249 08:31:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:48.249 rmmod nvme_tcp 00:30:48.249 rmmod nvme_fabrics 00:30:48.249 rmmod nvme_keyring 00:30:48.249 08:31:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:48.249 08:31:21 -- nvmf/common.sh@123 -- # set -e 00:30:48.249 08:31:21 -- nvmf/common.sh@124 -- # return 0 00:30:48.249 08:31:21 -- nvmf/common.sh@477 -- # '[' -n 2454771 ']' 00:30:48.249 08:31:21 -- nvmf/common.sh@478 -- # killprocess 2454771 00:30:48.249 08:31:21 -- common/autotest_common.sh@924 -- # '[' -z 2454771 ']' 00:30:48.249 08:31:21 -- common/autotest_common.sh@928 -- # kill -0 2454771 00:30:48.249 08:31:21 -- common/autotest_common.sh@929 -- # uname 00:30:48.249 08:31:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:30:48.249 08:31:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2454771 00:30:48.249 08:31:21 -- 
common/autotest_common.sh@930 -- # process_name=reactor_0 00:30:48.249 08:31:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:30:48.249 08:31:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2454771' 00:30:48.249 killing process with pid 2454771 00:30:48.249 08:31:21 -- common/autotest_common.sh@943 -- # kill 2454771 00:30:48.249 [2024-02-13 08:31:21.687484] app.c: 881:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:48.249 08:31:21 -- common/autotest_common.sh@948 -- # wait 2454771 00:30:49.628 08:31:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:49.628 08:31:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:49.628 08:31:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:49.628 08:31:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:49.628 08:31:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:49.628 08:31:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.628 08:31:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:49.628 08:31:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.165 08:31:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:52.165 00:30:52.165 real 0m22.054s 00:30:52.165 user 0m29.618s 00:30:52.165 sys 0m5.056s 00:30:52.165 08:31:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:52.165 08:31:25 -- common/autotest_common.sh@10 -- # set +x 00:30:52.165 ************************************ 00:30:52.165 END TEST nvmf_identify_passthru 00:30:52.165 ************************************ 00:30:52.165 08:31:25 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:52.165 08:31:25 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:52.165 08:31:25 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:30:52.165 08:31:25 -- common/autotest_common.sh@10 -- # set +x 00:30:52.165 ************************************ 00:30:52.165 START TEST nvmf_dif 00:30:52.165 ************************************ 00:30:52.165 08:31:25 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:52.165 * Looking for test storage... 00:30:52.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.165 08:31:25 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.165 08:31:25 -- nvmf/common.sh@7 -- # uname -s 00:30:52.165 08:31:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.165 08:31:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.165 08:31:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.165 08:31:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.165 08:31:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.165 08:31:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.165 08:31:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.165 08:31:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.165 08:31:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.165 08:31:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.165 08:31:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:52.165 08:31:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:52.165 08:31:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.165 08:31:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.165 08:31:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.165 08:31:25 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.165 08:31:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.165 08:31:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.165 08:31:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.165 08:31:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.165 08:31:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.165 08:31:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.165 08:31:25 -- paths/export.sh@5 -- # export PATH 00:30:52.165 08:31:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.165 08:31:25 -- nvmf/common.sh@46 -- # : 0 00:30:52.165 08:31:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:52.165 08:31:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:52.165 08:31:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:52.165 08:31:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.165 08:31:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.165 08:31:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:52.165 08:31:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:52.165 08:31:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:52.165 08:31:25 -- target/dif.sh@15 -- # NULL_META=16 00:30:52.165 08:31:25 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:52.165 08:31:25 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:52.165 08:31:25 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:52.165 08:31:25 -- target/dif.sh@135 -- # nvmftestinit 00:30:52.165 08:31:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:52.165 08:31:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.165 08:31:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:52.165 08:31:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:52.165 08:31:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:52.165 08:31:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.165 08:31:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:52.165 08:31:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.165 08:31:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:30:52.165 08:31:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:52.166 08:31:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:52.166 08:31:25 -- common/autotest_common.sh@10 -- # set +x 00:30:58.737 08:31:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:58.737 08:31:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:58.737 08:31:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:58.737 08:31:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:58.737 08:31:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:58.737 08:31:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:58.737 08:31:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:58.737 08:31:31 -- nvmf/common.sh@294 -- # net_devs=() 00:30:58.738 08:31:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:58.738 08:31:31 -- nvmf/common.sh@295 -- # e810=() 00:30:58.738 08:31:31 -- nvmf/common.sh@295 -- # local -ga e810 00:30:58.738 08:31:31 -- nvmf/common.sh@296 -- # x722=() 00:30:58.738 08:31:31 -- nvmf/common.sh@296 -- # local -ga x722 00:30:58.738 08:31:31 -- nvmf/common.sh@297 -- # mlx=() 00:30:58.738 08:31:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:58.738 08:31:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.738 08:31:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:58.738 08:31:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:58.738 08:31:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:58.738 08:31:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:58.738 08:31:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:58.738 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:58.738 08:31:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:58.738 08:31:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:58.738 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:58.738 08:31:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:58.738 08:31:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:58.738 08:31:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.738 08:31:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:58.738 08:31:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.738 08:31:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:58.738 Found net devices under 0000:af:00.0: cvl_0_0 00:30:58.738 08:31:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.738 08:31:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:58.738 08:31:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.738 08:31:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:58.738 08:31:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.738 08:31:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:58.738 Found net devices under 0000:af:00.1: cvl_0_1 00:30:58.738 08:31:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.738 08:31:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:58.738 08:31:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:58.738 08:31:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:58.738 08:31:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:58.738 08:31:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.738 08:31:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.738 08:31:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.738 08:31:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:58.738 08:31:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.738 08:31:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.738 08:31:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
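Lines 381-389 of nvmf/common.sh in the trace resolve each PCI address to its kernel netdev by globbing `/sys/bus/pci/devices/$pci/net/` and stripping the path prefix with `${pci_net_devs[@]##*/}`. A self-contained sketch of that lookup against a throwaway directory that mimics the sysfs layout (the temp tree is fabricated so the glob can run without the E810 hardware):

```shell
# Recreate the pci -> netdev resolution from the trace using a fake
# /sys/bus/pci/devices/<bdf>/net/<ifname> hierarchy under mktemp.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # glob: one entry per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

With two interfaces collected, `nvmf_tcp_init` (sh@233, `(( 2 > 1 ))` above) can assign one as the target interface and one as the initiator interface, exactly as the `cvl_0_0`/`cvl_0_1` split in this run.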
00:30:58.738 08:31:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.738 08:31:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.738 08:31:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:58.738 08:31:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:58.738 08:31:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.738 08:31:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.738 08:31:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.738 08:31:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.738 08:31:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:58.738 08:31:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.738 08:31:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.738 08:31:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.738 08:31:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:58.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:58.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:30:58.738 00:30:58.738 --- 10.0.0.2 ping statistics --- 00:30:58.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.738 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:30:58.738 08:31:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:58.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:30:58.738 00:30:58.738 --- 10.0.0.1 ping statistics --- 00:30:58.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.738 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:30:58.738 08:31:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.738 08:31:31 -- nvmf/common.sh@410 -- # return 0 00:30:58.738 08:31:31 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:58.738 08:31:31 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:00.645 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:31:00.905 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:00.905 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:00.905 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:01.166 08:31:34 -- nvmf/common.sh@442 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.166 08:31:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:01.166 08:31:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:01.166 08:31:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.166 08:31:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:01.166 08:31:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:01.166 08:31:34 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:01.166 08:31:34 -- target/dif.sh@137 -- # nvmfappstart 00:31:01.166 08:31:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:01.166 08:31:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:01.166 08:31:34 -- common/autotest_common.sh@10 -- # set +x 00:31:01.166 08:31:34 -- nvmf/common.sh@469 -- # nvmfpid=2460882 00:31:01.166 08:31:34 -- nvmf/common.sh@470 -- # waitforlisten 2460882 00:31:01.166 08:31:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:01.166 08:31:34 -- common/autotest_common.sh@817 -- # '[' -z 2460882 ']' 00:31:01.166 08:31:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.166 08:31:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:01.166 08:31:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.166 08:31:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:01.166 08:31:34 -- common/autotest_common.sh@10 -- # set +x 00:31:01.166 [2024-02-13 08:31:34.688748] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:31:01.166 [2024-02-13 08:31:34.688789] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.166 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.166 [2024-02-13 08:31:34.753106] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.166 [2024-02-13 08:31:34.824513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:01.166 [2024-02-13 08:31:34.824621] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.166 [2024-02-13 08:31:34.824629] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.166 [2024-02-13 08:31:34.824635] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.166 [2024-02-13 08:31:34.824662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.105 08:31:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:02.105 08:31:35 -- common/autotest_common.sh@850 -- # return 0 00:31:02.105 08:31:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:02.105 08:31:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:02.105 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:02.105 08:31:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.105 08:31:35 -- target/dif.sh@139 -- # create_transport 00:31:02.105 08:31:35 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:02.105 08:31:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.105 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:02.105 [2024-02-13 08:31:35.532092] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
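Stepping back, the `nvmf_tcp_init` sequence traced earlier — create a namespace, move the target interface into it, address both ends, bring links up, and open TCP port 4420 — can be sketched as a dry run. Namespace and interface names are the ones this job used; the commands are echoed rather than executed, since applying them for real requires root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing from the trace above.
# Swap run() for 'sudo "$@"' to actually apply the configuration.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target side moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```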
00:31:02.105 08:31:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.105 08:31:35 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:02.105 08:31:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:02.105 08:31:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:02.105 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:02.105 ************************************ 00:31:02.105 START TEST fio_dif_1_default 00:31:02.105 ************************************ 00:31:02.105 08:31:35 -- common/autotest_common.sh@1102 -- # fio_dif_1 00:31:02.105 08:31:35 -- target/dif.sh@86 -- # create_subsystems 0 00:31:02.105 08:31:35 -- target/dif.sh@28 -- # local sub 00:31:02.105 08:31:35 -- target/dif.sh@30 -- # for sub in "$@" 00:31:02.105 08:31:35 -- target/dif.sh@31 -- # create_subsystem 0 00:31:02.105 08:31:35 -- target/dif.sh@18 -- # local sub_id=0 00:31:02.105 08:31:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:02.105 08:31:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.105 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:02.105 bdev_null0 00:31:02.105 08:31:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.105 08:31:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:02.105 08:31:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.105 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:02.106 08:31:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.106 08:31:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:02.106 08:31:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.106 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:02.106 08:31:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.106 08:31:35 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.106 08:31:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.106 08:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:02.106 [2024-02-13 08:31:35.580350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.106 08:31:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.106 08:31:35 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:02.106 08:31:35 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:02.106 08:31:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:02.106 08:31:35 -- nvmf/common.sh@520 -- # config=() 00:31:02.106 08:31:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.106 08:31:35 -- nvmf/common.sh@520 -- # local subsystem config 00:31:02.106 08:31:35 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.106 08:31:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:02.106 08:31:35 -- target/dif.sh@82 -- # gen_fio_conf 00:31:02.106 08:31:35 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:02.106 08:31:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:02.106 { 00:31:02.106 "params": { 00:31:02.106 "name": "Nvme$subsystem", 00:31:02.106 "trtype": "$TEST_TRANSPORT", 00:31:02.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:02.106 "adrfam": "ipv4", 00:31:02.106 "trsvcid": "$NVMF_PORT", 00:31:02.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:02.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:02.106 "hdgst": ${hdgst:-false}, 00:31:02.106 "ddgst": ${ddgst:-false} 00:31:02.106 }, 00:31:02.106 "method": "bdev_nvme_attach_controller" 00:31:02.106 } 00:31:02.106 EOF 00:31:02.106 )") 00:31:02.106 08:31:35 -- target/dif.sh@54 -- # local 
file 00:31:02.106 08:31:35 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.106 08:31:35 -- target/dif.sh@56 -- # cat 00:31:02.106 08:31:35 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:02.106 08:31:35 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.106 08:31:35 -- common/autotest_common.sh@1318 -- # shift 00:31:02.106 08:31:35 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:02.106 08:31:35 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.106 08:31:35 -- nvmf/common.sh@542 -- # cat 00:31:02.106 08:31:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.106 08:31:35 -- target/dif.sh@72 -- # (( file <= files )) 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:02.106 08:31:35 -- nvmf/common.sh@544 -- # jq . 
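The `gen_nvmf_target_json` heredoc above is plain parameter expansion into a JSON fragment, which `jq .` then validates and reformats. A standalone sketch with this run's values substituted — nothing here contacts a target, and the variable values simply mirror the trace:

```shell
#!/usr/bin/env bash
# Expand the bdev_nvme_attach_controller template the way
# gen_nvmf_target_json does, using the values from this run.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

Because `hdgst` and `ddgst` are unset, the `${var:-false}` expansions default both digests to `false`, matching the final JSON printed in the log.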
00:31:02.106 08:31:35 -- nvmf/common.sh@545 -- # IFS=, 00:31:02.106 08:31:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:02.106 "params": { 00:31:02.106 "name": "Nvme0", 00:31:02.106 "trtype": "tcp", 00:31:02.106 "traddr": "10.0.0.2", 00:31:02.106 "adrfam": "ipv4", 00:31:02.106 "trsvcid": "4420", 00:31:02.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.106 "hdgst": false, 00:31:02.106 "ddgst": false 00:31:02.106 }, 00:31:02.106 "method": "bdev_nvme_attach_controller" 00:31:02.106 }' 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:02.106 08:31:35 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:02.106 08:31:35 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:02.106 08:31:35 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:02.106 08:31:35 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:02.106 08:31:35 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:02.106 08:31:35 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.366 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:02.366 fio-3.35 00:31:02.366 Starting 1 thread 00:31:02.366 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.625 [2024-02-13 08:31:36.298319] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:02.625 [2024-02-13 08:31:36.298356] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:14.839 00:31:14.839 filename0: (groupid=0, jobs=1): err= 0: pid=2461264: Tue Feb 13 08:31:46 2024 00:31:14.839 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10015msec) 00:31:14.839 slat (nsec): min=2703, max=33195, avg=5948.15, stdev=1262.40 00:31:14.840 clat (usec): min=41024, max=46705, avg=42061.82, stdev=413.37 00:31:14.840 lat (usec): min=41030, max=46714, avg=42067.77, stdev=413.28 00:31:14.840 clat percentiles (usec): 00:31:14.840 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:14.840 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:14.840 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:14.840 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:31:14.840 | 99.99th=[46924] 00:31:14.840 bw ( KiB/s): min= 352, max= 384, per=99.68%, avg=379.20, stdev=11.72, samples=20 00:31:14.840 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:31:14.840 lat (msec) : 50=100.00% 00:31:14.840 cpu : usr=95.27%, sys=4.49%, ctx=20, majf=0, minf=245 00:31:14.840 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.840 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.840 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:14.840 00:31:14.840 Run status group 0 (all jobs): 00:31:14.840 READ: bw=380KiB/s (389kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=3808KiB (3899kB), run=10015-10015msec 00:31:14.840 08:31:46 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:14.840 08:31:46 -- target/dif.sh@43 -- # local sub 00:31:14.840 08:31:46 -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.840 08:31:46 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:31:14.840 08:31:46 -- target/dif.sh@36 -- # local sub_id=0 00:31:14.840 08:31:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 00:31:14.840 real 0m11.157s 00:31:14.840 user 0m15.854s 00:31:14.840 sys 0m0.720s 00:31:14.840 08:31:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 ************************************ 00:31:14.840 END TEST fio_dif_1_default 00:31:14.840 ************************************ 00:31:14.840 08:31:46 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:14.840 08:31:46 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:14.840 08:31:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 ************************************ 00:31:14.840 START TEST fio_dif_1_multi_subsystems 00:31:14.840 ************************************ 00:31:14.840 08:31:46 -- common/autotest_common.sh@1102 -- # fio_dif_1_multi_subsystems 00:31:14.840 08:31:46 -- target/dif.sh@92 -- # local files=1 00:31:14.840 08:31:46 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:14.840 08:31:46 -- target/dif.sh@28 -- # local sub 00:31:14.840 08:31:46 -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.840 08:31:46 -- target/dif.sh@31 -- # create_subsystem 0 00:31:14.840 
08:31:46 -- target/dif.sh@18 -- # local sub_id=0 00:31:14.840 08:31:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 bdev_null0 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 [2024-02-13 08:31:46.773599] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.840 08:31:46 -- target/dif.sh@31 -- # create_subsystem 1 00:31:14.840 08:31:46 -- target/dif.sh@18 -- # local sub_id=1 00:31:14.840 08:31:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.840 bdev_null1 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.840 08:31:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.840 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:31:14.840 08:31:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.840 08:31:46 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:14.840 08:31:46 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:14.840 08:31:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:14.840 08:31:46 -- nvmf/common.sh@520 -- # config=() 00:31:14.840 08:31:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.840 08:31:46 -- nvmf/common.sh@520 -- # local subsystem config 00:31:14.840 08:31:46 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.840 08:31:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:14.840 08:31:46 -- target/dif.sh@82 -- # gen_fio_conf 00:31:14.840 08:31:46 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:14.840 { 00:31:14.840 "params": { 00:31:14.840 "name": "Nvme$subsystem", 00:31:14.840 "trtype": "$TEST_TRANSPORT", 00:31:14.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.840 "adrfam": "ipv4", 00:31:14.840 "trsvcid": "$NVMF_PORT", 00:31:14.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.840 "hdgst": ${hdgst:-false}, 00:31:14.840 "ddgst": ${ddgst:-false} 00:31:14.840 }, 00:31:14.840 "method": "bdev_nvme_attach_controller" 00:31:14.840 } 00:31:14.840 EOF 00:31:14.840 )") 00:31:14.840 08:31:46 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:14.840 08:31:46 -- target/dif.sh@54 -- # local file 00:31:14.840 08:31:46 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.840 08:31:46 -- target/dif.sh@56 -- # cat 00:31:14.840 08:31:46 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:14.840 08:31:46 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.840 08:31:46 -- common/autotest_common.sh@1318 -- # shift 00:31:14.840 08:31:46 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:14.840 08:31:46 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.840 08:31:46 -- nvmf/common.sh@542 -- # cat 00:31:14.840 08:31:46 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.840 08:31:46 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:14.840 08:31:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:14.840 08:31:46 -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.840 08:31:46 -- target/dif.sh@73 -- # cat 00:31:14.840 08:31:46 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:14.840 08:31:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:14.840 08:31:46 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:14.840 { 00:31:14.840 "params": { 00:31:14.840 "name": "Nvme$subsystem", 00:31:14.840 "trtype": "$TEST_TRANSPORT", 00:31:14.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.840 "adrfam": "ipv4", 00:31:14.840 "trsvcid": "$NVMF_PORT", 00:31:14.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.840 "hdgst": ${hdgst:-false}, 00:31:14.840 "ddgst": ${ddgst:-false} 00:31:14.840 }, 00:31:14.840 "method": "bdev_nvme_attach_controller" 00:31:14.840 } 00:31:14.840 EOF 00:31:14.840 )") 00:31:14.840 08:31:46 -- target/dif.sh@72 -- # (( file++ )) 00:31:14.840 08:31:46 -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.840 08:31:46 -- nvmf/common.sh@542 -- # cat 00:31:14.840 08:31:46 -- nvmf/common.sh@544 -- # jq . 00:31:14.840 08:31:46 -- nvmf/common.sh@545 -- # IFS=, 00:31:14.840 08:31:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:14.840 "params": { 00:31:14.840 "name": "Nvme0", 00:31:14.840 "trtype": "tcp", 00:31:14.840 "traddr": "10.0.0.2", 00:31:14.841 "adrfam": "ipv4", 00:31:14.841 "trsvcid": "4420", 00:31:14.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.841 "hdgst": false, 00:31:14.841 "ddgst": false 00:31:14.841 }, 00:31:14.841 "method": "bdev_nvme_attach_controller" 00:31:14.841 },{ 00:31:14.841 "params": { 00:31:14.841 "name": "Nvme1", 00:31:14.841 "trtype": "tcp", 00:31:14.841 "traddr": "10.0.0.2", 00:31:14.841 "adrfam": "ipv4", 00:31:14.841 "trsvcid": "4420", 00:31:14.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.841 "hdgst": false, 00:31:14.841 "ddgst": false 00:31:14.841 }, 00:31:14.841 "method": "bdev_nvme_attach_controller" 00:31:14.841 }' 00:31:14.841 08:31:46 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:14.841 08:31:46 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:14.841 08:31:46 -- 
common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.841 08:31:46 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.841 08:31:46 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:14.841 08:31:46 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:14.841 08:31:46 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:14.841 08:31:46 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:14.841 08:31:46 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:14.841 08:31:46 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.841 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:14.841 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:14.841 fio-3.35 00:31:14.841 Starting 2 threads 00:31:14.841 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.841 [2024-02-13 08:31:47.571188] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:14.841 [2024-02-13 08:31:47.571240] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:24.859 00:31:24.859 filename0: (groupid=0, jobs=1): err= 0: pid=2463234: Tue Feb 13 08:31:57 2024 00:31:24.859 read: IOPS=184, BW=739KiB/s (757kB/s)(7392KiB/10001msec) 00:31:24.859 slat (nsec): min=5894, max=29880, avg=7005.09, stdev=2244.89 00:31:24.859 clat (usec): min=1046, max=44020, avg=21626.82, stdev=20459.49 00:31:24.859 lat (usec): min=1052, max=44050, avg=21633.82, stdev=20458.80 00:31:24.859 clat percentiles (usec): 00:31:24.859 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[ 1074], 20.00th=[ 1074], 00:31:24.859 | 30.00th=[ 1090], 40.00th=[ 1090], 50.00th=[41681], 60.00th=[41681], 00:31:24.859 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42730], 00:31:24.859 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:31:24.859 | 99.99th=[43779] 00:31:24.859 bw ( KiB/s): min= 672, max= 768, per=50.10%, avg=739.37, stdev=33.55, samples=19 00:31:24.859 iops : min= 168, max= 192, avg=184.84, stdev= 8.39, samples=19 00:31:24.859 lat (msec) : 2=49.78%, 50=50.22% 00:31:24.859 cpu : usr=97.69%, sys=2.02%, ctx=14, majf=0, minf=140 00:31:24.859 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.859 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.859 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:24.859 filename1: (groupid=0, jobs=1): err= 0: pid=2463235: Tue Feb 13 08:31:57 2024 00:31:24.859 read: IOPS=183, BW=736KiB/s (754kB/s)(7360KiB/10001msec) 00:31:24.859 slat (nsec): min=5883, max=62977, avg=7109.58, stdev=2626.47 00:31:24.859 clat (usec): min=1036, max=43984, avg=21718.61, stdev=20564.54 00:31:24.859 lat (usec): min=1042, max=44016, avg=21725.72, 
stdev=20563.88 00:31:24.859 clat percentiles (usec): 00:31:24.859 | 1.00th=[ 1045], 5.00th=[ 1057], 10.00th=[ 1057], 20.00th=[ 1074], 00:31:24.859 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[41681], 60.00th=[41681], 00:31:24.859 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42730], 95.00th=[42730], 00:31:24.859 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:31:24.859 | 99.99th=[43779] 00:31:24.859 bw ( KiB/s): min= 640, max= 768, per=49.76%, avg=734.32, stdev=37.67, samples=19 00:31:24.859 iops : min= 160, max= 192, avg=183.58, stdev= 9.42, samples=19 00:31:24.859 lat (msec) : 2=49.78%, 50=50.22% 00:31:24.859 cpu : usr=97.61%, sys=2.02%, ctx=52, majf=0, minf=184 00:31:24.859 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.859 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.859 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:24.859 00:31:24.859 Run status group 0 (all jobs): 00:31:24.859 READ: bw=1475KiB/s (1510kB/s), 736KiB/s-739KiB/s (754kB/s-757kB/s), io=14.4MiB (15.1MB), run=10001-10001msec 00:31:24.859 08:31:57 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:24.859 08:31:57 -- target/dif.sh@43 -- # local sub 00:31:24.859 08:31:57 -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.859 08:31:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:24.859 08:31:57 -- target/dif.sh@36 -- # local sub_id=0 00:31:24.859 08:31:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.859 08:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.859 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 08:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 08:31:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
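The fio summary lines above are internally consistent: total bytes over runtime gives the reported bandwidth, and bandwidth over the 4 KiB block size gives the reported IOPS. A quick arithmetic check using filename0's numbers from this multi-subsystem run (integer truncation approximates fio's rounded display):

```shell
#!/usr/bin/env bash
# Recompute fio's summary for filename0 of the multi-subsystem run:
# 7392 KiB read over 10001 ms at 4 KiB per I/O.
io_kib=7392 runtime_ms=10001 bs_kib=4
bw_kib_s=$(( io_kib * 1000 / runtime_ms ))
iops=$(( bw_kib_s / bs_kib ))
echo "bw=${bw_kib_s}KiB/s iops=${iops}"   # → bw=739KiB/s iops=184
```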
00:31:24.860 08:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 08:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 08:31:57 -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.860 08:31:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:24.860 08:31:57 -- target/dif.sh@36 -- # local sub_id=1 00:31:24.860 08:31:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.860 08:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 08:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 08:31:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:24.860 08:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 08:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 00:31:24.860 real 0m11.196s 00:31:24.860 user 0m26.123s 00:31:24.860 sys 0m0.692s 00:31:24.860 08:31:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 ************************************ 00:31:24.860 END TEST fio_dif_1_multi_subsystems 00:31:24.860 ************************************ 00:31:24.860 08:31:57 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:24.860 08:31:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:24.860 08:31:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 ************************************ 00:31:24.860 START TEST fio_dif_rand_params 00:31:24.860 ************************************ 00:31:24.860 08:31:57 -- common/autotest_common.sh@1102 -- # fio_dif_rand_params 00:31:24.860 08:31:57 -- target/dif.sh@100 -- # 
local NULL_DIF 00:31:24.860 08:31:57 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:24.860 08:31:57 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:24.860 08:31:57 -- target/dif.sh@103 -- # bs=128k 00:31:24.860 08:31:57 -- target/dif.sh@103 -- # numjobs=3 00:31:24.860 08:31:57 -- target/dif.sh@103 -- # iodepth=3 00:31:24.860 08:31:57 -- target/dif.sh@103 -- # runtime=5 00:31:24.860 08:31:57 -- target/dif.sh@105 -- # create_subsystems 0 00:31:24.860 08:31:57 -- target/dif.sh@28 -- # local sub 00:31:24.860 08:31:57 -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.860 08:31:57 -- target/dif.sh@31 -- # create_subsystem 0 00:31:24.860 08:31:57 -- target/dif.sh@18 -- # local sub_id=0 00:31:24.860 08:31:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:24.860 08:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 bdev_null0 00:31:24.860 08:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 08:31:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:24.860 08:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 08:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 08:31:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:24.860 08:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.860 08:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.860 08:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 08:31:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.860 08:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.860 08:31:58 -- 
common/autotest_common.sh@10 -- # set +x 00:31:24.860 [2024-02-13 08:31:58.010991] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.860 08:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.860 08:31:58 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:24.860 08:31:58 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:24.860 08:31:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:24.860 08:31:58 -- nvmf/common.sh@520 -- # config=() 00:31:24.860 08:31:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.860 08:31:58 -- nvmf/common.sh@520 -- # local subsystem config 00:31:24.860 08:31:58 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.860 08:31:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:24.860 08:31:58 -- target/dif.sh@82 -- # gen_fio_conf 00:31:24.860 08:31:58 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:24.860 08:31:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:24.860 { 00:31:24.860 "params": { 00:31:24.860 "name": "Nvme$subsystem", 00:31:24.860 "trtype": "$TEST_TRANSPORT", 00:31:24.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.860 "adrfam": "ipv4", 00:31:24.860 "trsvcid": "$NVMF_PORT", 00:31:24.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.860 "hdgst": ${hdgst:-false}, 00:31:24.860 "ddgst": ${ddgst:-false} 00:31:24.860 }, 00:31:24.860 "method": "bdev_nvme_attach_controller" 00:31:24.860 } 00:31:24.860 EOF 00:31:24.860 )") 00:31:24.860 08:31:58 -- target/dif.sh@54 -- # local file 00:31:24.860 08:31:58 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.860 08:31:58 -- target/dif.sh@56 -- # cat 00:31:24.860 08:31:58 -- 
common/autotest_common.sh@1316 -- # local sanitizers 00:31:24.860 08:31:58 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.860 08:31:58 -- common/autotest_common.sh@1318 -- # shift 00:31:24.860 08:31:58 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:24.860 08:31:58 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.860 08:31:58 -- nvmf/common.sh@542 -- # cat 00:31:24.860 08:31:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:24.860 08:31:58 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.860 08:31:58 -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.860 08:31:58 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:24.860 08:31:58 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:24.860 08:31:58 -- nvmf/common.sh@544 -- # jq . 00:31:24.860 08:31:58 -- nvmf/common.sh@545 -- # IFS=, 00:31:24.860 08:31:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:24.860 "params": { 00:31:24.860 "name": "Nvme0", 00:31:24.860 "trtype": "tcp", 00:31:24.860 "traddr": "10.0.0.2", 00:31:24.860 "adrfam": "ipv4", 00:31:24.860 "trsvcid": "4420", 00:31:24.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.860 "hdgst": false, 00:31:24.860 "ddgst": false 00:31:24.860 }, 00:31:24.860 "method": "bdev_nvme_attach_controller" 00:31:24.860 }' 00:31:24.860 08:31:58 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:24.860 08:31:58 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:24.860 08:31:58 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.860 08:31:58 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.860 08:31:58 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:24.860 08:31:58 -- 
common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:24.860 08:31:58 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:24.860 08:31:58 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:24.860 08:31:58 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.860 08:31:58 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.860 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:24.860 ... 00:31:24.860 fio-3.35 00:31:24.860 Starting 3 threads 00:31:24.860 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.427 [2024-02-13 08:31:58.840734] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:25.427 [2024-02-13 08:31:58.840787] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:30.699 00:31:30.699 filename0: (groupid=0, jobs=1): err= 0: pid=2465205: Tue Feb 13 08:32:03 2024 00:31:30.699 read: IOPS=281, BW=35.2MiB/s (37.0MB/s)(176MiB/5004msec) 00:31:30.699 slat (nsec): min=4448, max=16869, avg=8697.75, stdev=2473.10 00:31:30.699 clat (usec): min=3739, max=90195, avg=10624.63, stdev=11895.19 00:31:30.699 lat (usec): min=3746, max=90207, avg=10633.33, stdev=11895.45 00:31:30.699 clat percentiles (usec): 00:31:30.699 | 1.00th=[ 4293], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5735], 00:31:30.699 | 30.00th=[ 6194], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 7635], 00:31:30.699 | 70.00th=[ 8225], 80.00th=[ 9110], 90.00th=[11469], 95.00th=[48497], 00:31:30.699 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[90702], 00:31:30.699 | 99.99th=[90702] 00:31:30.699 bw ( KiB/s): min=25344, max=48896, per=39.02%, avg=35441.78, stdev=7783.95, samples=9 00:31:30.699 iops : min= 198, max= 382, avg=276.89, stdev=60.81, 
samples=9 00:31:30.699 lat (msec) : 4=0.28%, 10=85.05%, 20=6.45%, 50=4.61%, 100=3.61% 00:31:30.699 cpu : usr=95.00%, sys=4.56%, ctx=8, majf=0, minf=100 00:31:30.699 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.699 issued rwts: total=1411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.699 filename0: (groupid=0, jobs=1): err= 0: pid=2465206: Tue Feb 13 08:32:03 2024 00:31:30.699 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(185MiB/5025msec) 00:31:30.699 slat (nsec): min=5935, max=25885, avg=8659.20, stdev=2672.11 00:31:30.699 clat (usec): min=3799, max=91956, avg=10169.81, stdev=11273.28 00:31:30.699 lat (usec): min=3806, max=91967, avg=10178.47, stdev=11273.51 00:31:30.699 clat percentiles (usec): 00:31:30.699 | 1.00th=[ 4359], 5.00th=[ 4883], 10.00th=[ 5211], 20.00th=[ 5800], 00:31:30.699 | 30.00th=[ 6259], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7635], 00:31:30.699 | 70.00th=[ 8160], 80.00th=[ 8979], 90.00th=[10683], 95.00th=[48497], 00:31:30.699 | 99.00th=[52167], 99.50th=[53740], 99.90th=[90702], 99.95th=[91751], 00:31:30.699 | 99.99th=[91751] 00:31:30.699 bw ( KiB/s): min=25600, max=55552, per=41.63%, avg=37811.20, stdev=8154.63, samples=10 00:31:30.699 iops : min= 200, max= 434, avg=295.40, stdev=63.71, samples=10 00:31:30.699 lat (msec) : 4=0.07%, 10=87.91%, 20=5.07%, 50=4.39%, 100=2.57% 00:31:30.699 cpu : usr=94.51%, sys=5.04%, ctx=8, majf=0, minf=117 00:31:30.699 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.699 issued rwts: total=1480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:30.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.699 filename0: (groupid=0, jobs=1): err= 0: pid=2465207: Tue Feb 13 08:32:03 2024 00:31:30.699 read: IOPS=134, BW=16.8MiB/s (17.7MB/s)(84.4MiB/5012msec) 00:31:30.699 slat (nsec): min=5955, max=24504, avg=9750.98, stdev=2543.24 00:31:30.699 clat (usec): min=5633, max=96273, avg=22262.92, stdev=20053.30 00:31:30.699 lat (usec): min=5642, max=96285, avg=22272.67, stdev=20053.29 00:31:30.699 clat percentiles (usec): 00:31:30.699 | 1.00th=[ 6390], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 9634], 00:31:30.699 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12125], 60.00th=[12911], 00:31:30.699 | 70.00th=[14222], 80.00th=[51119], 90.00th=[53740], 95.00th=[55313], 00:31:30.699 | 99.00th=[93848], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:31:30.699 | 99.99th=[95945] 00:31:30.699 bw ( KiB/s): min=11776, max=26880, per=18.94%, avg=17203.20, stdev=4199.23, samples=10 00:31:30.699 iops : min= 92, max= 210, avg=134.40, stdev=32.81, samples=10 00:31:30.699 lat (msec) : 10=24.30%, 20=50.52%, 50=1.04%, 100=24.15% 00:31:30.699 cpu : usr=96.49%, sys=3.19%, ctx=6, majf=0, minf=73 00:31:30.699 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.699 issued rwts: total=675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.699 00:31:30.699 Run status group 0 (all jobs): 00:31:30.699 READ: bw=88.7MiB/s (93.0MB/s), 16.8MiB/s-36.8MiB/s (17.7MB/s-38.6MB/s), io=446MiB (467MB), run=5004-5025msec 00:31:30.699 08:32:04 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:30.699 08:32:04 -- target/dif.sh@43 -- # local sub 00:31:30.699 08:32:04 -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.699 08:32:04 -- target/dif.sh@46 -- # destroy_subsystem 
0 00:31:30.699 08:32:04 -- target/dif.sh@36 -- # local sub_id=0 00:31:30.699 08:32:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:30.699 08:32:04 -- target/dif.sh@109 -- # bs=4k 00:31:30.699 08:32:04 -- target/dif.sh@109 -- # numjobs=8 00:31:30.699 08:32:04 -- target/dif.sh@109 -- # iodepth=16 00:31:30.699 08:32:04 -- target/dif.sh@109 -- # runtime= 00:31:30.699 08:32:04 -- target/dif.sh@109 -- # files=2 00:31:30.699 08:32:04 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:30.699 08:32:04 -- target/dif.sh@28 -- # local sub 00:31:30.699 08:32:04 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.699 08:32:04 -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.699 08:32:04 -- target/dif.sh@18 -- # local sub_id=0 00:31:30.699 08:32:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 bdev_null0 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 
08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 [2024-02-13 08:32:04.223128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.699 08:32:04 -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.699 08:32:04 -- target/dif.sh@18 -- # local sub_id=1 00:31:30.699 08:32:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 bdev_null1 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- 
common/autotest_common.sh@10 -- # set +x 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.699 08:32:04 -- target/dif.sh@31 -- # create_subsystem 2 00:31:30.699 08:32:04 -- target/dif.sh@18 -- # local sub_id=2 00:31:30.699 08:32:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 bdev_null2 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.699 08:32:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:30.699 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.699 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.699 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.700 08:32:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:30.700 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.700 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.700 08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.700 08:32:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:30.700 08:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.700 08:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:30.700 
08:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.700 08:32:04 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:30.700 08:32:04 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:30.700 08:32:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:30.700 08:32:04 -- nvmf/common.sh@520 -- # config=() 00:31:30.700 08:32:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.700 08:32:04 -- nvmf/common.sh@520 -- # local subsystem config 00:31:30.700 08:32:04 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.700 08:32:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:30.700 08:32:04 -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.700 08:32:04 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:30.700 08:32:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:30.700 { 00:31:30.700 "params": { 00:31:30.700 "name": "Nvme$subsystem", 00:31:30.700 "trtype": "$TEST_TRANSPORT", 00:31:30.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.700 "adrfam": "ipv4", 00:31:30.700 "trsvcid": "$NVMF_PORT", 00:31:30.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.700 "hdgst": ${hdgst:-false}, 00:31:30.700 "ddgst": ${ddgst:-false} 00:31:30.700 }, 00:31:30.700 "method": "bdev_nvme_attach_controller" 00:31:30.700 } 00:31:30.700 EOF 00:31:30.700 )") 00:31:30.700 08:32:04 -- target/dif.sh@54 -- # local file 00:31:30.700 08:32:04 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.700 08:32:04 -- target/dif.sh@56 -- # cat 00:31:30.700 08:32:04 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:30.700 08:32:04 -- common/autotest_common.sh@1317 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.700 08:32:04 -- common/autotest_common.sh@1318 -- # shift 00:31:30.700 08:32:04 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:30.700 08:32:04 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.700 08:32:04 -- nvmf/common.sh@542 -- # cat 00:31:30.700 08:32:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.700 08:32:04 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.700 08:32:04 -- target/dif.sh@73 -- # cat 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:30.700 08:32:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:30.700 08:32:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:30.700 { 00:31:30.700 "params": { 00:31:30.700 "name": "Nvme$subsystem", 00:31:30.700 "trtype": "$TEST_TRANSPORT", 00:31:30.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.700 "adrfam": "ipv4", 00:31:30.700 "trsvcid": "$NVMF_PORT", 00:31:30.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.700 "hdgst": ${hdgst:-false}, 00:31:30.700 "ddgst": ${ddgst:-false} 00:31:30.700 }, 00:31:30.700 "method": "bdev_nvme_attach_controller" 00:31:30.700 } 00:31:30.700 EOF 00:31:30.700 )") 00:31:30.700 08:32:04 -- target/dif.sh@72 -- # (( file++ )) 00:31:30.700 08:32:04 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.700 08:32:04 -- nvmf/common.sh@542 -- # cat 00:31:30.700 08:32:04 -- target/dif.sh@73 -- # cat 00:31:30.700 08:32:04 -- target/dif.sh@72 -- # (( file++ )) 00:31:30.700 08:32:04 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.700 08:32:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:30.700 08:32:04 
-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:30.700 { 00:31:30.700 "params": { 00:31:30.700 "name": "Nvme$subsystem", 00:31:30.700 "trtype": "$TEST_TRANSPORT", 00:31:30.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.700 "adrfam": "ipv4", 00:31:30.700 "trsvcid": "$NVMF_PORT", 00:31:30.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.700 "hdgst": ${hdgst:-false}, 00:31:30.700 "ddgst": ${ddgst:-false} 00:31:30.700 }, 00:31:30.700 "method": "bdev_nvme_attach_controller" 00:31:30.700 } 00:31:30.700 EOF 00:31:30.700 )") 00:31:30.700 08:32:04 -- nvmf/common.sh@542 -- # cat 00:31:30.700 08:32:04 -- nvmf/common.sh@544 -- # jq . 00:31:30.700 08:32:04 -- nvmf/common.sh@545 -- # IFS=, 00:31:30.700 08:32:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:30.700 "params": { 00:31:30.700 "name": "Nvme0", 00:31:30.700 "trtype": "tcp", 00:31:30.700 "traddr": "10.0.0.2", 00:31:30.700 "adrfam": "ipv4", 00:31:30.700 "trsvcid": "4420", 00:31:30.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.700 "hdgst": false, 00:31:30.700 "ddgst": false 00:31:30.700 }, 00:31:30.700 "method": "bdev_nvme_attach_controller" 00:31:30.700 },{ 00:31:30.700 "params": { 00:31:30.700 "name": "Nvme1", 00:31:30.700 "trtype": "tcp", 00:31:30.700 "traddr": "10.0.0.2", 00:31:30.700 "adrfam": "ipv4", 00:31:30.700 "trsvcid": "4420", 00:31:30.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.700 "hdgst": false, 00:31:30.700 "ddgst": false 00:31:30.700 }, 00:31:30.700 "method": "bdev_nvme_attach_controller" 00:31:30.700 },{ 00:31:30.700 "params": { 00:31:30.700 "name": "Nvme2", 00:31:30.700 "trtype": "tcp", 00:31:30.700 "traddr": "10.0.0.2", 00:31:30.700 "adrfam": "ipv4", 00:31:30.700 "trsvcid": "4420", 00:31:30.700 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:30.700 "hostnqn": "nqn.2016-06.io.spdk:host2", 
00:31:30.700 "hdgst": false, 00:31:30.700 "ddgst": false 00:31:30.700 }, 00:31:30.700 "method": "bdev_nvme_attach_controller" 00:31:30.700 }' 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:30.700 08:32:04 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:30.700 08:32:04 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:30.700 08:32:04 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:30.700 08:32:04 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:30.700 08:32:04 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.700 08:32:04 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.267 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:31.267 ... 00:31:31.267 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:31.267 ... 00:31:31.267 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:31.267 ... 00:31:31.267 fio-3.35 00:31:31.267 Starting 24 threads 00:31:31.267 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.835 [2024-02-13 08:32:05.341655] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
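The trace above shows `gen_nvmf_target_json` (nvmf/common.sh) building one JSON params fragment per subsystem with `config+=("$(cat <<-EOF ...)")` and then joining them with `IFS=,` before handing the result to fio as `--spdk_json_conf`. A minimal sketch of that assemble-and-join pattern, with the fragments trimmed to two illustrative keys (the real fragments carry trtype, traddr, trsvcid, digests, etc.):

```shell
# Hedged sketch of the config-array pattern from the trace: one JSON
# fragment per subsystem id, joined with commas via IFS expansion.
# Field set here is reduced for brevity; values mirror the log.
config=()
for subsystem in 0 1; do
  config+=("{\"name\": \"Nvme$subsystem\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\"}")
done
# "${config[*]}" joins array elements with the first character of IFS,
# which is how the trace's `IFS=, ... printf '%s\n'` step emits the blob.
IFS=,
printf '%s\n' "${config[*]}"
```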
00:31:31.835 [2024-02-13 08:32:05.341706] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:44.044 00:31:44.044 filename0: (groupid=0, jobs=1): err= 0: pid=2466425: Tue Feb 13 08:32:15 2024 00:31:44.044 read: IOPS=542, BW=2169KiB/s (2221kB/s)(21.2MiB/10029msec) 00:31:44.044 slat (usec): min=5, max=104, avg=38.47, stdev=22.81 00:31:44.044 clat (msec): min=12, max=488, avg=29.22, stdev=35.80 00:31:44.044 lat (msec): min=12, max=488, avg=29.26, stdev=35.80 00:31:44.044 clat percentiles (msec): 00:31:44.044 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.044 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.044 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 31], 95.00th=[ 34], 00:31:44.044 | 99.00th=[ 176], 99.50th=[ 368], 99.90th=[ 489], 99.95th=[ 489], 00:31:44.044 | 99.99th=[ 489] 00:31:44.044 bw ( KiB/s): min= 128, max= 2784, per=4.15%, avg=2156.11, stdev=788.16, samples=19 00:31:44.044 iops : min= 32, max= 696, avg=538.95, stdev=197.01, samples=19 00:31:44.044 lat (msec) : 20=3.77%, 50=94.50%, 100=0.55%, 250=0.29%, 500=0.88% 00:31:44.044 cpu : usr=98.83%, sys=0.63%, ctx=21, majf=0, minf=34 00:31:44.044 IO depths : 1=1.7%, 2=3.6%, 4=15.2%, 8=67.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:44.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.044 complete : 0=0.0%, 4=92.4%, 8=2.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.044 issued rwts: total=5438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.044 filename0: (groupid=0, jobs=1): err= 0: pid=2466426: Tue Feb 13 08:32:15 2024 00:31:44.044 read: IOPS=521, BW=2087KiB/s (2137kB/s)(20.4MiB/10008msec) 00:31:44.044 slat (usec): min=6, max=136, avg=34.81, stdev=23.24 00:31:44.044 clat (msec): min=10, max=469, avg=30.43, stdev=37.85 00:31:44.044 lat (msec): min=10, max=469, avg=30.46, stdev=37.85 00:31:44.044 clat percentiles (msec): 
00:31:44.044 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.044 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:31:44.045 | 70.00th=[ 28], 80.00th=[ 30], 90.00th=[ 34], 95.00th=[ 38], 00:31:44.045 | 99.00th=[ 192], 99.50th=[ 388], 99.90th=[ 468], 99.95th=[ 468], 00:31:44.045 | 99.99th=[ 468] 00:31:44.045 bw ( KiB/s): min= 128, max= 2816, per=3.99%, avg=2076.47, stdev=775.04, samples=19 00:31:44.045 iops : min= 32, max= 704, avg=519.05, stdev=193.73, samples=19 00:31:44.045 lat (msec) : 20=3.62%, 50=94.85%, 100=0.31%, 250=0.31%, 500=0.92% 00:31:44.045 cpu : usr=99.01%, sys=0.58%, ctx=26, majf=0, minf=27 00:31:44.045 IO depths : 1=1.9%, 2=4.8%, 4=16.1%, 8=65.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 complete : 0=0.0%, 4=92.5%, 8=2.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 issued rwts: total=5221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=2466427: Tue Feb 13 08:32:15 2024 00:31:44.045 read: IOPS=522, BW=2092KiB/s (2142kB/s)(20.4MiB/10005msec) 00:31:44.045 slat (nsec): min=4443, max=91296, avg=28072.12, stdev=17565.82 00:31:44.045 clat (msec): min=5, max=471, avg=30.45, stdev=40.20 00:31:44.045 lat (msec): min=5, max=471, avg=30.48, stdev=40.20 00:31:44.045 clat percentiles (msec): 00:31:44.045 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.045 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:31:44.045 | 70.00th=[ 28], 80.00th=[ 30], 90.00th=[ 35], 95.00th=[ 38], 00:31:44.045 | 99.00th=[ 74], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:31:44.045 | 99.99th=[ 472] 00:31:44.045 bw ( KiB/s): min= 128, max= 2688, per=3.99%, avg=2073.53, stdev=774.92, samples=19 00:31:44.045 iops : min= 32, max= 672, avg=518.37, stdev=193.72, samples=19 00:31:44.045 lat (msec) : 10=0.50%, 
20=4.24%, 50=93.92%, 100=0.42%, 500=0.92% 00:31:44.045 cpu : usr=98.88%, sys=0.66%, ctx=25, majf=0, minf=36 00:31:44.045 IO depths : 1=0.4%, 2=1.1%, 4=9.3%, 8=74.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 complete : 0=0.0%, 4=91.1%, 8=5.4%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=2466428: Tue Feb 13 08:32:15 2024 00:31:44.045 read: IOPS=556, BW=2225KiB/s (2278kB/s)(21.7MiB/10007msec) 00:31:44.045 slat (usec): min=4, max=163, avg=39.63, stdev=25.11 00:31:44.045 clat (msec): min=8, max=576, avg=28.36, stdev=38.39 00:31:44.045 lat (msec): min=8, max=576, avg=28.40, stdev=38.39 00:31:44.045 clat percentiles (msec): 00:31:44.045 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.045 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.045 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:31:44.045 | 99.00th=[ 174], 99.50th=[ 384], 99.90th=[ 575], 99.95th=[ 575], 00:31:44.045 | 99.99th=[ 575] 00:31:44.045 bw ( KiB/s): min= 128, max= 2688, per=4.26%, avg=2215.47, stdev=845.07, samples=19 00:31:44.045 iops : min= 32, max= 672, avg=553.84, stdev=211.32, samples=19 00:31:44.045 lat (msec) : 10=0.13%, 20=0.52%, 50=98.20%, 100=0.04%, 250=0.25% 00:31:44.045 lat (msec) : 500=0.57%, 750=0.29% 00:31:44.045 cpu : usr=97.65%, sys=1.09%, ctx=34, majf=0, minf=30 00:31:44.045 IO depths : 1=5.3%, 2=11.5%, 4=24.8%, 8=51.2%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 issued rwts: total=5566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.045 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:31:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=2466430: Tue Feb 13 08:32:15 2024 00:31:44.045 read: IOPS=559, BW=2237KiB/s (2291kB/s)(21.9MiB/10013msec) 00:31:44.045 slat (usec): min=4, max=112, avg=37.75, stdev=22.60 00:31:44.045 clat (msec): min=2, max=528, avg=28.31, stdev=34.92 00:31:44.045 lat (msec): min=2, max=528, avg=28.35, stdev=34.92 00:31:44.045 clat percentiles (msec): 00:31:44.045 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.045 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.045 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:31:44.045 | 99.00th=[ 243], 99.50th=[ 359], 99.90th=[ 477], 99.95th=[ 531], 00:31:44.045 | 99.99th=[ 531] 00:31:44.045 bw ( KiB/s): min= 207, max= 3032, per=4.29%, avg=2230.05, stdev=814.24, samples=19 00:31:44.045 iops : min= 51, max= 758, avg=557.47, stdev=203.66, samples=19 00:31:44.045 lat (msec) : 4=0.29%, 10=0.77%, 20=0.93%, 50=96.68%, 100=0.20% 00:31:44.045 lat (msec) : 250=0.29%, 500=0.77%, 750=0.09% 00:31:44.045 cpu : usr=99.19%, sys=0.42%, ctx=15, majf=0, minf=30 00:31:44.045 IO depths : 1=3.2%, 2=8.0%, 4=21.9%, 8=57.5%, 16=9.4%, 32=0.0%, >=64=0.0% 00:31:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=2466431: Tue Feb 13 08:32:15 2024 00:31:44.045 read: IOPS=556, BW=2226KiB/s (2279kB/s)(21.8MiB/10007msec) 00:31:44.045 slat (nsec): min=7655, max=90755, avg=41165.90, stdev=15710.69 00:31:44.045 clat (msec): min=14, max=662, avg=28.40, stdev=35.90 00:31:44.045 lat (msec): min=14, max=662, avg=28.44, stdev=35.90 00:31:44.045 clat percentiles (msec): 00:31:44.045 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 
00:31:44.045 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.045 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:31:44.045 | 99.00th=[ 176], 99.50th=[ 368], 99.90th=[ 489], 99.95th=[ 489], 00:31:44.045 | 99.99th=[ 667] 00:31:44.045 bw ( KiB/s): min= 128, max= 2688, per=4.26%, avg=2215.79, stdev=815.17, samples=19 00:31:44.045 iops : min= 32, max= 672, avg=553.89, stdev=203.77, samples=19 00:31:44.045 lat (msec) : 20=0.04%, 50=98.28%, 100=0.54%, 250=0.32%, 500=0.79% 00:31:44.045 lat (msec) : 750=0.04% 00:31:44.045 cpu : usr=96.61%, sys=1.67%, ctx=134, majf=0, minf=40 00:31:44.045 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=2466432: Tue Feb 13 08:32:15 2024 00:31:44.045 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10004msec) 00:31:44.045 slat (nsec): min=4552, max=91727, avg=28743.33, stdev=17952.06 00:31:44.045 clat (msec): min=5, max=576, avg=30.37, stdev=40.81 00:31:44.045 lat (msec): min=5, max=576, avg=30.40, stdev=40.81 00:31:44.045 clat percentiles (msec): 00:31:44.045 | 1.00th=[ 14], 5.00th=[ 19], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.045 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:31:44.045 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 35], 95.00th=[ 39], 00:31:44.045 | 99.00th=[ 178], 99.50th=[ 477], 99.90th=[ 575], 99.95th=[ 575], 00:31:44.045 | 99.99th=[ 575] 00:31:44.045 bw ( KiB/s): min= 128, max= 2784, per=3.99%, avg=2075.21, stdev=801.40, samples=19 00:31:44.045 iops : min= 32, max= 696, avg=518.79, stdev=200.34, samples=19 00:31:44.045 lat (msec) : 10=0.61%, 20=6.25%, 50=91.82%, 100=0.10%, 
250=0.53% 00:31:44.045 lat (msec) : 500=0.38%, 750=0.31% 00:31:44.045 cpu : usr=98.23%, sys=1.04%, ctx=204, majf=0, minf=35 00:31:44.045 IO depths : 1=0.5%, 2=1.2%, 4=8.8%, 8=75.1%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 complete : 0=0.0%, 4=90.6%, 8=6.0%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 issued rwts: total=5244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.045 filename0: (groupid=0, jobs=1): err= 0: pid=2466433: Tue Feb 13 08:32:15 2024 00:31:44.045 read: IOPS=556, BW=2226KiB/s (2279kB/s)(21.8MiB/10007msec) 00:31:44.045 slat (nsec): min=7671, max=92396, avg=37815.49, stdev=16337.70 00:31:44.045 clat (msec): min=14, max=636, avg=28.45, stdev=39.82 00:31:44.045 lat (msec): min=14, max=636, avg=28.49, stdev=39.82 00:31:44.045 clat percentiles (msec): 00:31:44.045 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.045 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.045 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:31:44.045 | 99.00th=[ 70], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 527], 00:31:44.045 | 99.99th=[ 634] 00:31:44.045 bw ( KiB/s): min= 128, max= 2816, per=4.26%, avg=2215.79, stdev=831.75, samples=19 00:31:44.045 iops : min= 32, max= 704, avg=553.89, stdev=207.91, samples=19 00:31:44.045 lat (msec) : 20=0.13%, 50=98.67%, 100=0.34%, 500=0.57%, 750=0.29% 00:31:44.045 cpu : usr=98.70%, sys=0.86%, ctx=60, majf=0, minf=30 00:31:44.045 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:44.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.045 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.045 
filename1: (groupid=0, jobs=1): err= 0: pid=2466434: Tue Feb 13 08:32:15 2024 00:31:44.045 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10008msec) 00:31:44.045 slat (usec): min=6, max=113, avg=38.47, stdev=21.64 00:31:44.045 clat (msec): min=10, max=576, avg=29.99, stdev=40.90 00:31:44.045 lat (msec): min=10, max=576, avg=30.03, stdev=40.90 00:31:44.045 clat percentiles (msec): 00:31:44.045 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.045 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.045 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 33], 95.00th=[ 37], 00:31:44.045 | 99.00th=[ 176], 99.50th=[ 477], 99.90th=[ 575], 99.95th=[ 575], 00:31:44.045 | 99.99th=[ 575] 00:31:44.045 bw ( KiB/s): min= 128, max= 2688, per=4.05%, avg=2104.32, stdev=785.93, samples=19 00:31:44.046 iops : min= 32, max= 672, avg=526.00, stdev=196.45, samples=19 00:31:44.046 lat (msec) : 20=3.28%, 50=95.20%, 100=0.30%, 250=0.60%, 500=0.30% 00:31:44.046 lat (msec) : 750=0.30% 00:31:44.046 cpu : usr=98.13%, sys=0.98%, ctx=22, majf=0, minf=42 00:31:44.046 IO depths : 1=1.4%, 2=3.4%, 4=12.4%, 8=69.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 complete : 0=0.0%, 4=91.7%, 8=4.6%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 issued rwts: total=5297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=2466435: Tue Feb 13 08:32:15 2024 00:31:44.046 read: IOPS=557, BW=2231KiB/s (2284kB/s)(21.8MiB/10006msec) 00:31:44.046 slat (usec): min=5, max=114, avg=37.51, stdev=23.01 00:31:44.046 clat (msec): min=6, max=637, avg=28.34, stdev=39.87 00:31:44.046 lat (msec): min=6, max=637, avg=28.38, stdev=39.87 00:31:44.046 clat percentiles (msec): 00:31:44.046 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 23], 00:31:44.046 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 
25], 60.00th=[ 25], 00:31:44.046 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 28], 00:31:44.046 | 99.00th=[ 73], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 527], 00:31:44.046 | 99.99th=[ 642] 00:31:44.046 bw ( KiB/s): min= 128, max= 2736, per=4.27%, avg=2218.00, stdev=844.92, samples=19 00:31:44.046 iops : min= 32, max= 684, avg=554.47, stdev=211.28, samples=19 00:31:44.046 lat (msec) : 10=0.22%, 20=1.08%, 50=97.60%, 100=0.25%, 500=0.57% 00:31:44.046 lat (msec) : 750=0.29% 00:31:44.046 cpu : usr=98.30%, sys=0.80%, ctx=67, majf=0, minf=36 00:31:44.046 IO depths : 1=2.6%, 2=8.8%, 4=24.7%, 8=54.0%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 issued rwts: total=5580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=2466436: Tue Feb 13 08:32:15 2024 00:31:44.046 read: IOPS=537, BW=2151KiB/s (2202kB/s)(21.0MiB/10017msec) 00:31:44.046 slat (nsec): min=3488, max=87581, avg=27205.40, stdev=17734.97 00:31:44.046 clat (msec): min=10, max=358, avg=29.60, stdev=32.61 00:31:44.046 lat (msec): min=10, max=358, avg=29.62, stdev=32.61 00:31:44.046 clat percentiles (msec): 00:31:44.046 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.046 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.046 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 33], 95.00th=[ 36], 00:31:44.046 | 99.00th=[ 300], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 359], 00:31:44.046 | 99.99th=[ 359] 00:31:44.046 bw ( KiB/s): min= 176, max= 2816, per=4.12%, avg=2141.63, stdev=799.35, samples=19 00:31:44.046 iops : min= 44, max= 704, avg=535.32, stdev=199.86, samples=19 00:31:44.046 lat (msec) : 20=5.79%, 50=92.42%, 100=0.30%, 250=0.48%, 500=1.00% 00:31:44.046 cpu : usr=98.74%, sys=0.83%, ctx=16, 
majf=0, minf=33 00:31:44.046 IO depths : 1=0.7%, 2=1.7%, 4=9.8%, 8=73.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 complete : 0=0.0%, 4=91.0%, 8=5.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 issued rwts: total=5386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=2466437: Tue Feb 13 08:32:15 2024 00:31:44.046 read: IOPS=558, BW=2233KiB/s (2286kB/s)(21.8MiB/10004msec) 00:31:44.046 slat (usec): min=6, max=106, avg=34.96, stdev=20.66 00:31:44.046 clat (msec): min=10, max=636, avg=28.36, stdev=39.79 00:31:44.046 lat (msec): min=10, max=636, avg=28.39, stdev=39.79 00:31:44.046 clat percentiles (msec): 00:31:44.046 | 1.00th=[ 17], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.046 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.046 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:31:44.046 | 99.00th=[ 71], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 527], 00:31:44.046 | 99.99th=[ 634] 00:31:44.046 bw ( KiB/s): min= 128, max= 2816, per=4.28%, avg=2222.32, stdev=822.03, samples=19 00:31:44.046 iops : min= 32, max= 704, avg=555.53, stdev=205.49, samples=19 00:31:44.046 lat (msec) : 20=1.56%, 50=97.33%, 100=0.25%, 500=0.57%, 750=0.29% 00:31:44.046 cpu : usr=98.38%, sys=0.81%, ctx=27, majf=0, minf=65 00:31:44.046 IO depths : 1=5.5%, 2=11.6%, 4=24.6%, 8=51.3%, 16=7.1%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=2466438: Tue Feb 13 08:32:15 2024 00:31:44.046 read: IOPS=556, 
BW=2226KiB/s (2279kB/s)(21.8MiB/10007msec) 00:31:44.046 slat (nsec): min=6971, max=90142, avg=34002.37, stdev=17015.34 00:31:44.046 clat (msec): min=14, max=655, avg=28.49, stdev=40.10 00:31:44.046 lat (msec): min=14, max=655, avg=28.52, stdev=40.10 00:31:44.046 clat percentiles (msec): 00:31:44.046 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.046 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.046 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:31:44.046 | 99.00th=[ 71], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 527], 00:31:44.046 | 99.99th=[ 659] 00:31:44.046 bw ( KiB/s): min= 128, max= 2816, per=4.26%, avg=2215.79, stdev=831.75, samples=19 00:31:44.046 iops : min= 32, max= 704, avg=553.89, stdev=207.91, samples=19 00:31:44.046 lat (msec) : 20=0.11%, 50=98.71%, 100=0.32%, 250=0.04%, 500=0.50% 00:31:44.046 lat (msec) : 750=0.32% 00:31:44.046 cpu : usr=98.96%, sys=0.60%, ctx=61, majf=0, minf=33 00:31:44.046 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=2466439: Tue Feb 13 08:32:15 2024 00:31:44.046 read: IOPS=519, BW=2080KiB/s (2130kB/s)(20.3MiB/10004msec) 00:31:44.046 slat (nsec): min=4625, max=90732, avg=27206.72, stdev=17559.42 00:31:44.046 clat (msec): min=5, max=527, avg=30.63, stdev=39.71 00:31:44.046 lat (msec): min=5, max=527, avg=30.66, stdev=39.71 00:31:44.046 clat percentiles (msec): 00:31:44.046 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.046 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:31:44.046 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 35], 95.00th=[ 
38], 00:31:44.046 | 99.00th=[ 176], 99.50th=[ 368], 99.90th=[ 527], 99.95th=[ 527], 00:31:44.046 | 99.99th=[ 527] 00:31:44.046 bw ( KiB/s): min= 128, max= 2720, per=3.97%, avg=2061.32, stdev=779.00, samples=19 00:31:44.046 iops : min= 32, max= 680, avg=515.32, stdev=194.74, samples=19 00:31:44.046 lat (msec) : 10=0.52%, 20=4.15%, 50=94.21%, 250=0.19%, 500=0.81% 00:31:44.046 lat (msec) : 750=0.12% 00:31:44.046 cpu : usr=98.69%, sys=0.76%, ctx=80, majf=0, minf=39 00:31:44.046 IO depths : 1=0.1%, 2=0.7%, 4=8.8%, 8=75.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 complete : 0=0.0%, 4=90.6%, 8=6.0%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 issued rwts: total=5202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=2466440: Tue Feb 13 08:32:15 2024 00:31:44.046 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10005msec) 00:31:44.046 slat (nsec): min=6412, max=82528, avg=25780.65, stdev=16131.67 00:31:44.046 clat (msec): min=10, max=680, avg=29.77, stdev=40.77 00:31:44.046 lat (msec): min=10, max=680, avg=29.79, stdev=40.78 00:31:44.046 clat percentiles (msec): 00:31:44.046 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.046 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.046 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 33], 95.00th=[ 37], 00:31:44.046 | 99.00th=[ 176], 99.50th=[ 384], 99.90th=[ 684], 99.95th=[ 684], 00:31:44.046 | 99.99th=[ 684] 00:31:44.046 bw ( KiB/s): min= 128, max= 2784, per=4.10%, avg=2130.68, stdev=797.76, samples=19 00:31:44.046 iops : min= 32, max= 696, avg=532.63, stdev=199.48, samples=19 00:31:44.046 lat (msec) : 20=3.89%, 50=94.88%, 100=0.19%, 250=0.15%, 500=0.60% 00:31:44.046 lat (msec) : 750=0.30% 00:31:44.046 cpu : usr=98.89%, sys=0.67%, ctx=10, majf=0, minf=32 00:31:44.046 IO depths : 1=0.3%, 
2=1.9%, 4=10.5%, 8=72.3%, 16=15.0%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 complete : 0=0.0%, 4=91.3%, 8=5.7%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.046 issued rwts: total=5352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.046 filename1: (groupid=0, jobs=1): err= 0: pid=2466441: Tue Feb 13 08:32:15 2024 00:31:44.046 read: IOPS=551, BW=2207KiB/s (2260kB/s)(21.6MiB/10009msec) 00:31:44.046 slat (nsec): min=6388, max=91195, avg=31928.97, stdev=18829.72 00:31:44.046 clat (msec): min=11, max=488, avg=28.76, stdev=33.80 00:31:44.046 lat (msec): min=11, max=488, avg=28.79, stdev=33.80 00:31:44.046 clat percentiles (msec): 00:31:44.046 | 1.00th=[ 17], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.046 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.046 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 32], 00:31:44.046 | 99.00th=[ 262], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 384], 00:31:44.046 | 99.99th=[ 489] 00:31:44.046 bw ( KiB/s): min= 240, max= 2688, per=4.23%, avg=2196.42, stdev=807.23, samples=19 00:31:44.046 iops : min= 60, max= 672, avg=549.05, stdev=201.78, samples=19 00:31:44.046 lat (msec) : 20=2.61%, 50=95.69%, 100=0.54%, 250=0.04%, 500=1.12% 00:31:44.046 cpu : usr=98.50%, sys=0.95%, ctx=73, majf=0, minf=31 00:31:44.046 IO depths : 1=3.2%, 2=6.4%, 4=15.5%, 8=63.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:44.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=92.2%, 8=3.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466443: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10013msec) 
00:31:44.047 slat (usec): min=2, max=180, avg=29.28, stdev=23.07 00:31:44.047 clat (usec): min=1798, max=513426, avg=27830.23, stdev=34224.48 00:31:44.047 lat (usec): min=1804, max=513473, avg=27859.50, stdev=34224.33 00:31:44.047 clat percentiles (msec): 00:31:44.047 | 1.00th=[ 3], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 23], 00:31:44.047 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.047 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 29], 95.00th=[ 33], 00:31:44.047 | 99.00th=[ 226], 99.50th=[ 359], 99.90th=[ 418], 99.95th=[ 418], 00:31:44.047 | 99.99th=[ 514] 00:31:44.047 bw ( KiB/s): min= 176, max= 3328, per=4.38%, avg=2274.21, stdev=851.15, samples=19 00:31:44.047 iops : min= 44, max= 832, avg=568.47, stdev=212.88, samples=19 00:31:44.047 lat (msec) : 2=0.37%, 4=0.96%, 10=0.56%, 20=6.16%, 50=90.73% 00:31:44.047 lat (msec) : 100=0.10%, 250=0.17%, 500=0.91%, 750=0.03% 00:31:44.047 cpu : usr=98.94%, sys=0.67%, ctx=20, majf=0, minf=57 00:31:44.047 IO depths : 1=1.8%, 2=3.6%, 4=10.1%, 8=71.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=91.0%, 8=5.7%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466444: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10009msec) 00:31:44.047 slat (nsec): min=5911, max=90713, avg=27419.50, stdev=17038.59 00:31:44.047 clat (msec): min=9, max=576, avg=30.82, stdev=39.51 00:31:44.047 lat (msec): min=9, max=576, avg=30.84, stdev=39.51 00:31:44.047 clat percentiles (msec): 00:31:44.047 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.047 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:31:44.047 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 35], 95.00th=[ 
39], 00:31:44.047 | 99.00th=[ 178], 99.50th=[ 384], 99.90th=[ 575], 99.95th=[ 575], 00:31:44.047 | 99.99th=[ 575] 00:31:44.047 bw ( KiB/s): min= 128, max= 2688, per=3.93%, avg=2041.32, stdev=779.22, samples=19 00:31:44.047 iops : min= 32, max= 672, avg=510.26, stdev=194.84, samples=19 00:31:44.047 lat (msec) : 10=0.27%, 20=3.54%, 50=94.64%, 100=0.31%, 250=0.31% 00:31:44.047 lat (msec) : 500=0.66%, 750=0.27% 00:31:44.047 cpu : usr=98.75%, sys=0.77%, ctx=80, majf=0, minf=38 00:31:44.047 IO depths : 1=0.5%, 2=2.0%, 4=11.6%, 8=71.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:31:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=91.8%, 8=4.6%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466445: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=556, BW=2227KiB/s (2280kB/s)(21.8MiB/10026msec) 00:31:44.047 slat (nsec): min=7060, max=86905, avg=23031.80, stdev=15657.76 00:31:44.047 clat (msec): min=13, max=366, avg=28.53, stdev=32.16 00:31:44.047 lat (msec): min=13, max=366, avg=28.56, stdev=32.16 00:31:44.047 clat percentiles (msec): 00:31:44.047 | 1.00th=[ 22], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.047 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 25], 00:31:44.047 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:31:44.047 | 99.00th=[ 174], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:31:44.047 | 99.99th=[ 368] 00:31:44.047 bw ( KiB/s): min= 128, max= 2688, per=4.28%, avg=2226.90, stdev=793.22, samples=20 00:31:44.047 iops : min= 32, max= 672, avg=556.70, stdev=198.29, samples=20 00:31:44.047 lat (msec) : 20=0.66%, 50=97.90%, 250=0.57%, 500=0.86% 00:31:44.047 cpu : usr=97.16%, sys=1.44%, ctx=26, majf=0, minf=43 00:31:44.047 IO depths : 1=2.7%, 2=8.8%, 4=24.7%, 8=54.0%, 16=9.9%, 
32=0.0%, >=64=0.0% 00:31:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466446: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=545, BW=2181KiB/s (2233kB/s)(21.3MiB/10008msec) 00:31:44.047 slat (usec): min=6, max=114, avg=34.26, stdev=21.88 00:31:44.047 clat (msec): min=12, max=375, avg=29.15, stdev=32.57 00:31:44.047 lat (msec): min=12, max=375, avg=29.18, stdev=32.57 00:31:44.047 clat percentiles (msec): 00:31:44.047 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.047 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.047 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 34], 00:31:44.047 | 99.00th=[ 288], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 376], 00:31:44.047 | 99.99th=[ 376] 00:31:44.047 bw ( KiB/s): min= 176, max= 2816, per=4.17%, avg=2168.95, stdev=794.38, samples=19 00:31:44.047 iops : min= 44, max= 704, avg=542.16, stdev=198.57, samples=19 00:31:44.047 lat (msec) : 20=3.90%, 50=94.34%, 100=0.29%, 250=0.40%, 500=1.06% 00:31:44.047 cpu : usr=98.96%, sys=0.63%, ctx=17, majf=0, minf=44 00:31:44.047 IO depths : 1=1.0%, 2=2.4%, 4=11.5%, 8=71.6%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=91.4%, 8=4.7%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466447: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=548, BW=2195KiB/s (2247kB/s)(21.5MiB/10013msec) 00:31:44.047 slat (usec): min=6, max=112, 
avg=33.99, stdev=22.60 00:31:44.047 clat (msec): min=10, max=670, avg=28.93, stdev=39.64 00:31:44.047 lat (msec): min=10, max=670, avg=28.96, stdev=39.64 00:31:44.047 clat percentiles (msec): 00:31:44.047 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 23], 00:31:44.047 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.047 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 30], 95.00th=[ 35], 00:31:44.047 | 99.00th=[ 87], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 527], 00:31:44.047 | 99.99th=[ 667] 00:31:44.047 bw ( KiB/s): min= 128, max= 2816, per=4.20%, avg=2183.79, stdev=827.02, samples=19 00:31:44.047 iops : min= 32, max= 704, avg=545.89, stdev=206.73, samples=19 00:31:44.047 lat (msec) : 20=7.43%, 50=91.26%, 100=0.36%, 250=0.11%, 500=0.58% 00:31:44.047 lat (msec) : 750=0.25% 00:31:44.047 cpu : usr=98.95%, sys=0.61%, ctx=33, majf=0, minf=42 00:31:44.047 IO depths : 1=1.6%, 2=3.9%, 4=13.9%, 8=68.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=92.1%, 8=3.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466448: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10006msec) 00:31:44.047 slat (usec): min=5, max=112, avg=35.95, stdev=22.36 00:31:44.047 clat (msec): min=5, max=623, avg=29.79, stdev=38.79 00:31:44.047 lat (msec): min=5, max=623, avg=29.83, stdev=38.79 00:31:44.047 clat percentiles (msec): 00:31:44.047 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.047 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.047 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 32], 95.00th=[ 37], 00:31:44.047 | 99.00th=[ 176], 99.50th=[ 376], 99.90th=[ 625], 99.95th=[ 625], 00:31:44.047 | 
99.99th=[ 625] 00:31:44.047 bw ( KiB/s): min= 176, max= 2768, per=4.08%, avg=2118.42, stdev=793.92, samples=19 00:31:44.047 iops : min= 44, max= 692, avg=529.58, stdev=198.46, samples=19 00:31:44.047 lat (msec) : 10=0.49%, 20=2.87%, 50=95.34%, 100=0.30%, 250=0.11% 00:31:44.047 lat (msec) : 500=0.71%, 750=0.19% 00:31:44.047 cpu : usr=98.85%, sys=0.72%, ctx=31, majf=0, minf=36 00:31:44.047 IO depths : 1=0.6%, 2=1.7%, 4=9.5%, 8=74.0%, 16=14.2%, 32=0.0%, >=64=0.0% 00:31:44.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=91.0%, 8=5.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466449: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=538, BW=2154KiB/s (2206kB/s)(21.1MiB/10013msec) 00:31:44.047 slat (nsec): min=6738, max=89658, avg=30003.90, stdev=17582.46 00:31:44.047 clat (msec): min=10, max=628, avg=29.54, stdev=37.68 00:31:44.047 lat (msec): min=10, max=628, avg=29.57, stdev=37.67 00:31:44.047 clat percentiles (msec): 00:31:44.047 | 1.00th=[ 15], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.047 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 25], 60.00th=[ 26], 00:31:44.047 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 32], 95.00th=[ 35], 00:31:44.047 | 99.00th=[ 161], 99.50th=[ 376], 99.90th=[ 625], 99.95th=[ 625], 00:31:44.047 | 99.99th=[ 625] 00:31:44.047 bw ( KiB/s): min= 160, max= 2688, per=4.12%, avg=2141.95, stdev=795.17, samples=19 00:31:44.047 iops : min= 40, max= 672, avg=535.42, stdev=198.76, samples=19 00:31:44.047 lat (msec) : 20=3.23%, 50=95.10%, 100=0.48%, 250=0.30%, 500=0.70% 00:31:44.047 lat (msec) : 750=0.19% 00:31:44.047 cpu : usr=91.34%, sys=4.23%, ctx=972, majf=0, minf=41 00:31:44.047 IO depths : 1=0.3%, 2=1.4%, 4=8.6%, 8=75.1%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:44.047 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 complete : 0=0.0%, 4=91.0%, 8=5.0%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.047 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.047 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.047 filename2: (groupid=0, jobs=1): err= 0: pid=2466450: Tue Feb 13 08:32:15 2024 00:31:44.047 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10005msec) 00:31:44.048 slat (nsec): min=6833, max=87885, avg=29540.04, stdev=17370.14 00:31:44.048 clat (msec): min=5, max=527, avg=30.44, stdev=39.68 00:31:44.048 lat (msec): min=5, max=527, avg=30.47, stdev=39.68 00:31:44.048 clat percentiles (msec): 00:31:44.048 | 1.00th=[ 13], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:31:44.048 | 30.00th=[ 24], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:31:44.048 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 35], 95.00th=[ 39], 00:31:44.048 | 99.00th=[ 184], 99.50th=[ 368], 99.90th=[ 527], 99.95th=[ 527], 00:31:44.048 | 99.99th=[ 527] 00:31:44.048 bw ( KiB/s): min= 128, max= 2688, per=3.99%, avg=2073.53, stdev=784.78, samples=19 00:31:44.048 iops : min= 32, max= 672, avg=518.37, stdev=196.19, samples=19 00:31:44.048 lat (msec) : 10=0.55%, 20=3.96%, 50=94.30%, 100=0.08%, 250=0.19% 00:31:44.048 lat (msec) : 500=0.80%, 750=0.11% 00:31:44.048 cpu : usr=98.23%, sys=1.10%, ctx=195, majf=0, minf=28 00:31:44.048 IO depths : 1=0.5%, 2=1.8%, 4=10.7%, 8=72.9%, 16=14.0%, 32=0.0%, >=64=0.0% 00:31:44.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.048 complete : 0=0.0%, 4=91.0%, 8=5.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.048 issued rwts: total=5230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.048 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:44.048 00:31:44.048 Run status group 0 (all jobs): 00:31:44.048 READ: bw=50.8MiB/s (53.2MB/s), 2066KiB/s-2283KiB/s (2115kB/s-2338kB/s), io=509MiB (534MB), run=10004-10029msec 00:31:44.048 08:32:15 -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:44.048 08:32:15 -- target/dif.sh@43 -- # local sub 00:31:44.048 08:32:15 -- target/dif.sh@45 -- # for sub in "$@" 00:31:44.048 08:32:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:44.048 08:32:15 -- target/dif.sh@36 -- # local sub_id=0 00:31:44.048 08:32:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@45 -- # for sub in "$@" 00:31:44.048 08:32:15 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:44.048 08:32:15 -- target/dif.sh@36 -- # local sub_id=1 00:31:44.048 08:32:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@45 -- # for sub in "$@" 00:31:44.048 08:32:15 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:44.048 08:32:15 -- target/dif.sh@36 -- # local sub_id=2 00:31:44.048 08:32:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:44.048 08:32:15 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:44.048 08:32:15 -- target/dif.sh@115 -- # numjobs=2 00:31:44.048 08:32:15 -- target/dif.sh@115 -- # iodepth=8 00:31:44.048 08:32:15 -- target/dif.sh@115 -- # runtime=5 00:31:44.048 08:32:15 -- target/dif.sh@115 -- # files=1 00:31:44.048 08:32:15 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:44.048 08:32:15 -- target/dif.sh@28 -- # local sub 00:31:44.048 08:32:15 -- target/dif.sh@30 -- # for sub in "$@" 00:31:44.048 08:32:15 -- target/dif.sh@31 -- # create_subsystem 0 00:31:44.048 08:32:15 -- target/dif.sh@18 -- # local sub_id=0 00:31:44.048 08:32:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 bdev_null0 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 [2024-02-13 08:32:15.928442] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@30 -- # for sub in "$@" 00:31:44.048 08:32:15 -- target/dif.sh@31 -- # create_subsystem 1 00:31:44.048 08:32:15 -- target/dif.sh@18 -- # local sub_id=1 00:31:44.048 08:32:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 bdev_null1 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:31:44.048 08:32:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.048 08:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.048 08:32:15 -- common/autotest_common.sh@10 -- # set +x 00:31:44.048 08:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.048 08:32:15 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:44.048 08:32:15 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:44.048 08:32:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:44.048 08:32:15 -- nvmf/common.sh@520 -- # config=() 00:31:44.048 08:32:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.048 08:32:15 -- nvmf/common.sh@520 -- # local subsystem config 00:31:44.048 08:32:15 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.048 08:32:15 -- target/dif.sh@82 -- # gen_fio_conf 00:31:44.048 08:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:44.048 08:32:15 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:44.048 08:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:44.048 { 00:31:44.048 "params": { 00:31:44.048 "name": "Nvme$subsystem", 00:31:44.048 "trtype": "$TEST_TRANSPORT", 00:31:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.048 "adrfam": "ipv4", 00:31:44.048 "trsvcid": "$NVMF_PORT", 00:31:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.048 "hdgst": ${hdgst:-false}, 00:31:44.048 "ddgst": ${ddgst:-false} 00:31:44.048 }, 00:31:44.048 "method": "bdev_nvme_attach_controller" 00:31:44.048 } 00:31:44.048 EOF 00:31:44.048 )") 00:31:44.048 08:32:15 -- target/dif.sh@54 -- # local file 00:31:44.048 08:32:15 -- common/autotest_common.sh@1316 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.048 08:32:15 -- target/dif.sh@56 -- # cat 00:31:44.048 08:32:15 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:44.048 08:32:15 -- common/autotest_common.sh@1317 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.048 08:32:15 -- common/autotest_common.sh@1318 -- # shift 00:31:44.048 08:32:15 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:44.048 08:32:15 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.048 08:32:15 -- nvmf/common.sh@542 -- # cat 00:31:44.048 08:32:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:44.048 08:32:15 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.048 08:32:15 -- target/dif.sh@72 -- # (( file <= files )) 00:31:44.048 08:32:15 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:44.048 08:32:15 -- target/dif.sh@73 -- # cat 00:31:44.048 08:32:15 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:44.048 08:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:44.048 08:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:44.048 { 00:31:44.048 "params": { 00:31:44.048 "name": "Nvme$subsystem", 00:31:44.048 "trtype": "$TEST_TRANSPORT", 00:31:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.048 "adrfam": "ipv4", 00:31:44.048 "trsvcid": "$NVMF_PORT", 00:31:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.048 "hdgst": ${hdgst:-false}, 00:31:44.048 "ddgst": ${ddgst:-false} 00:31:44.048 }, 00:31:44.049 "method": "bdev_nvme_attach_controller" 00:31:44.049 } 00:31:44.049 EOF 00:31:44.049 )") 00:31:44.049 08:32:15 -- target/dif.sh@72 -- # (( file++ )) 00:31:44.049 08:32:15 -- target/dif.sh@72 -- # (( file <= files )) 00:31:44.049 08:32:15 -- nvmf/common.sh@542 -- # cat 00:31:44.049 08:32:15 -- 
nvmf/common.sh@544 -- # jq . 00:31:44.049 08:32:15 -- nvmf/common.sh@545 -- # IFS=, 00:31:44.049 08:32:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:44.049 "params": { 00:31:44.049 "name": "Nvme0", 00:31:44.049 "trtype": "tcp", 00:31:44.049 "traddr": "10.0.0.2", 00:31:44.049 "adrfam": "ipv4", 00:31:44.049 "trsvcid": "4420", 00:31:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:44.049 "hdgst": false, 00:31:44.049 "ddgst": false 00:31:44.049 }, 00:31:44.049 "method": "bdev_nvme_attach_controller" 00:31:44.049 },{ 00:31:44.049 "params": { 00:31:44.049 "name": "Nvme1", 00:31:44.049 "trtype": "tcp", 00:31:44.049 "traddr": "10.0.0.2", 00:31:44.049 "adrfam": "ipv4", 00:31:44.049 "trsvcid": "4420", 00:31:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:44.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:44.049 "hdgst": false, 00:31:44.049 "ddgst": false 00:31:44.049 }, 00:31:44.049 "method": "bdev_nvme_attach_controller" 00:31:44.049 }' 00:31:44.049 08:32:16 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:44.049 08:32:16 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:44.049 08:32:16 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.049 08:32:16 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.049 08:32:16 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:44.049 08:32:16 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:44.049 08:32:16 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:44.049 08:32:16 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:44.049 08:32:16 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:44.049 08:32:16 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.049 
filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:44.049 ... 00:31:44.049 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:44.049 ... 00:31:44.049 fio-3.35 00:31:44.049 Starting 4 threads 00:31:44.049 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.049 [2024-02-13 08:32:16.741027] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:44.049 [2024-02-13 08:32:16.741074] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:48.245 00:31:48.245 filename0: (groupid=0, jobs=1): err= 0: pid=2468326: Tue Feb 13 08:32:21 2024 00:31:48.245 read: IOPS=2878, BW=22.5MiB/s (23.6MB/s)(113MiB/5003msec) 00:31:48.245 slat (nsec): min=5946, max=49656, avg=9056.72, stdev=3376.95 00:31:48.245 clat (usec): min=1243, max=45534, avg=2754.70, stdev=2010.54 00:31:48.245 lat (usec): min=1249, max=45546, avg=2763.76, stdev=2010.54 00:31:48.245 clat percentiles (usec): 00:31:48.245 | 1.00th=[ 1631], 5.00th=[ 1860], 10.00th=[ 2008], 20.00th=[ 2147], 00:31:48.245 | 30.00th=[ 2343], 40.00th=[ 2474], 50.00th=[ 2638], 60.00th=[ 2769], 00:31:48.245 | 70.00th=[ 2933], 80.00th=[ 3130], 90.00th=[ 3392], 95.00th=[ 3621], 00:31:48.245 | 99.00th=[ 4228], 99.50th=[ 4555], 99.90th=[43254], 99.95th=[45351], 00:31:48.245 | 99.99th=[45351] 00:31:48.245 bw ( KiB/s): min=20576, max=25952, per=26.61%, avg=23063.00, stdev=1736.21, samples=9 00:31:48.245 iops : min= 2572, max= 3244, avg=2882.78, stdev=216.95, samples=9 00:31:48.245 lat (msec) : 2=9.44%, 4=88.82%, 10=1.51%, 50=0.22% 00:31:48.245 cpu : usr=95.86%, sys=3.32%, ctx=333, majf=0, minf=0 00:31:48.245 IO depths : 1=0.2%, 2=1.7%, 4=67.0%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 complete : 0=0.0%, 
4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 issued rwts: total=14400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.245 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.245 filename0: (groupid=0, jobs=1): err= 0: pid=2468327: Tue Feb 13 08:32:21 2024 00:31:48.245 read: IOPS=2888, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:31:48.245 slat (nsec): min=5985, max=34619, avg=8829.93, stdev=3035.60 00:31:48.245 clat (usec): min=1234, max=45745, avg=2747.39, stdev=2049.19 00:31:48.245 lat (usec): min=1241, max=45758, avg=2756.22, stdev=2049.20 00:31:48.245 clat percentiles (usec): 00:31:48.245 | 1.00th=[ 1614], 5.00th=[ 1827], 10.00th=[ 1991], 20.00th=[ 2147], 00:31:48.245 | 30.00th=[ 2311], 40.00th=[ 2474], 50.00th=[ 2606], 60.00th=[ 2769], 00:31:48.245 | 70.00th=[ 2933], 80.00th=[ 3097], 90.00th=[ 3359], 95.00th=[ 3621], 00:31:48.245 | 99.00th=[ 4293], 99.50th=[ 4883], 99.90th=[44303], 99.95th=[44827], 00:31:48.245 | 99.99th=[45876] 00:31:48.245 bw ( KiB/s): min=20592, max=26240, per=26.45%, avg=22918.56, stdev=1936.21, samples=9 00:31:48.245 iops : min= 2574, max= 3280, avg=2864.78, stdev=241.97, samples=9 00:31:48.245 lat (msec) : 2=10.54%, 4=87.53%, 10=1.70%, 50=0.22% 00:31:48.245 cpu : usr=97.26%, sys=2.36%, ctx=7, majf=0, minf=9 00:31:48.245 IO depths : 1=0.1%, 2=1.3%, 4=65.7%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 issued rwts: total=14443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.245 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.245 filename1: (groupid=0, jobs=1): err= 0: pid=2468328: Tue Feb 13 08:32:21 2024 00:31:48.245 read: IOPS=2196, BW=17.2MiB/s (18.0MB/s)(85.8MiB/5001msec) 00:31:48.245 slat (nsec): min=5909, max=40456, avg=8532.35, stdev=2915.75 00:31:48.245 clat (usec): min=933, max=48159, avg=3615.70, 
stdev=5633.09 00:31:48.245 lat (usec): min=940, max=48177, avg=3624.23, stdev=5633.12 00:31:48.245 clat percentiles (usec): 00:31:48.245 | 1.00th=[ 1012], 5.00th=[ 1729], 10.00th=[ 2073], 20.00th=[ 2343], 00:31:48.245 | 30.00th=[ 2573], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3032], 00:31:48.245 | 70.00th=[ 3195], 80.00th=[ 3425], 90.00th=[ 3720], 95.00th=[ 4047], 00:31:48.245 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46400], 99.95th=[47973], 00:31:48.245 | 99.99th=[47973] 00:31:48.245 bw ( KiB/s): min=13072, max=22752, per=19.76%, avg=17126.89, stdev=3351.47, samples=9 00:31:48.245 iops : min= 1634, max= 2844, avg=2140.78, stdev=418.95, samples=9 00:31:48.245 lat (usec) : 1000=0.77% 00:31:48.245 lat (msec) : 2=7.70%, 4=86.20%, 10=3.50%, 50=1.82% 00:31:48.245 cpu : usr=97.42%, sys=2.20%, ctx=7, majf=0, minf=9 00:31:48.245 IO depths : 1=0.7%, 2=4.0%, 4=68.0%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 issued rwts: total=10986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.245 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.245 filename1: (groupid=0, jobs=1): err= 0: pid=2468330: Tue Feb 13 08:32:21 2024 00:31:48.245 read: IOPS=2871, BW=22.4MiB/s (23.5MB/s)(112MiB/5002msec) 00:31:48.245 slat (nsec): min=6138, max=59385, avg=9001.44, stdev=3092.72 00:31:48.245 clat (usec): min=938, max=46946, avg=2761.58, stdev=2080.91 00:31:48.245 lat (usec): min=945, max=46958, avg=2770.59, stdev=2080.91 00:31:48.245 clat percentiles (usec): 00:31:48.245 | 1.00th=[ 1483], 5.00th=[ 1762], 10.00th=[ 1926], 20.00th=[ 2114], 00:31:48.245 | 30.00th=[ 2311], 40.00th=[ 2474], 50.00th=[ 2606], 60.00th=[ 2769], 00:31:48.245 | 70.00th=[ 2966], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3752], 00:31:48.245 | 99.00th=[ 4359], 99.50th=[ 4621], 99.90th=[45351], 99.95th=[45876], 00:31:48.245 | 
99.99th=[46924] 00:31:48.245 bw ( KiB/s): min=20624, max=26368, per=26.67%, avg=23113.78, stdev=1794.88, samples=9 00:31:48.245 iops : min= 2578, max= 3296, avg=2889.11, stdev=224.32, samples=9 00:31:48.245 lat (usec) : 1000=0.04% 00:31:48.245 lat (msec) : 2=12.54%, 4=84.88%, 10=2.32%, 50=0.22% 00:31:48.245 cpu : usr=96.96%, sys=2.70%, ctx=10, majf=0, minf=9 00:31:48.245 IO depths : 1=0.2%, 2=2.3%, 4=66.2%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.245 issued rwts: total=14364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.245 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.245 00:31:48.245 Run status group 0 (all jobs): 00:31:48.245 READ: bw=84.6MiB/s (88.7MB/s), 17.2MiB/s-22.6MiB/s (18.0MB/s-23.7MB/s), io=423MiB (444MB), run=5001-5003msec 00:31:48.506 08:32:22 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:48.506 08:32:22 -- target/dif.sh@43 -- # local sub 00:31:48.506 08:32:22 -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.506 08:32:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:48.506 08:32:22 -- target/dif.sh@36 -- # local sub_id=0 00:31:48.506 08:32:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.506 08:32:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.506 08:32:22 -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.506 08:32:22 -- target/dif.sh@46 -- # 
destroy_subsystem 1 00:31:48.506 08:32:22 -- target/dif.sh@36 -- # local sub_id=1 00:31:48.506 08:32:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.506 08:32:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.506 00:31:48.506 real 0m24.128s 00:31:48.506 user 4m50.609s 00:31:48.506 sys 0m4.538s 00:31:48.506 08:32:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 ************************************ 00:31:48.506 END TEST fio_dif_rand_params 00:31:48.506 ************************************ 00:31:48.506 08:32:22 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:48.506 08:32:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:48.506 08:32:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 ************************************ 00:31:48.506 START TEST fio_dif_digest 00:31:48.506 ************************************ 00:31:48.506 08:32:22 -- common/autotest_common.sh@1102 -- # fio_dif_digest 00:31:48.506 08:32:22 -- target/dif.sh@123 -- # local NULL_DIF 00:31:48.506 08:32:22 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:48.506 08:32:22 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:48.506 08:32:22 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:48.506 08:32:22 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:48.506 08:32:22 -- target/dif.sh@127 -- # numjobs=3 
00:31:48.506 08:32:22 -- target/dif.sh@127 -- # iodepth=3 00:31:48.506 08:32:22 -- target/dif.sh@127 -- # runtime=10 00:31:48.506 08:32:22 -- target/dif.sh@128 -- # hdgst=true 00:31:48.506 08:32:22 -- target/dif.sh@128 -- # ddgst=true 00:31:48.506 08:32:22 -- target/dif.sh@130 -- # create_subsystems 0 00:31:48.506 08:32:22 -- target/dif.sh@28 -- # local sub 00:31:48.506 08:32:22 -- target/dif.sh@30 -- # for sub in "$@" 00:31:48.506 08:32:22 -- target/dif.sh@31 -- # create_subsystem 0 00:31:48.506 08:32:22 -- target/dif.sh@18 -- # local sub_id=0 00:31:48.506 08:32:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 bdev_null0 00:31:48.506 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.506 08:32:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.506 08:32:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.506 08:32:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.506 08:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.506 08:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:48.506 [2024-02-13 08:32:22.191552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:31:48.766 08:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.766 08:32:22 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:48.766 08:32:22 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:48.766 08:32:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:48.766 08:32:22 -- nvmf/common.sh@520 -- # config=() 00:31:48.766 08:32:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.766 08:32:22 -- nvmf/common.sh@520 -- # local subsystem config 00:31:48.766 08:32:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:48.766 08:32:22 -- common/autotest_common.sh@1333 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.766 08:32:22 -- target/dif.sh@82 -- # gen_fio_conf 00:31:48.766 08:32:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:48.766 { 00:31:48.766 "params": { 00:31:48.766 "name": "Nvme$subsystem", 00:31:48.766 "trtype": "$TEST_TRANSPORT", 00:31:48.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:48.766 "adrfam": "ipv4", 00:31:48.766 "trsvcid": "$NVMF_PORT", 00:31:48.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:48.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:48.766 "hdgst": ${hdgst:-false}, 00:31:48.766 "ddgst": ${ddgst:-false} 00:31:48.766 }, 00:31:48.766 "method": "bdev_nvme_attach_controller" 00:31:48.766 } 00:31:48.766 EOF 00:31:48.766 )") 00:31:48.766 08:32:22 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:48.766 08:32:22 -- target/dif.sh@54 -- # local file 00:31:48.766 08:32:22 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:48.766 08:32:22 -- target/dif.sh@56 -- # cat 00:31:48.766 08:32:22 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:48.766 08:32:22 -- common/autotest_common.sh@1317 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.766 08:32:22 -- common/autotest_common.sh@1318 -- # shift 00:31:48.766 08:32:22 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:48.766 08:32:22 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.766 08:32:22 -- nvmf/common.sh@542 -- # cat 00:31:48.766 08:32:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:48.766 08:32:22 -- target/dif.sh@72 -- # (( file <= files )) 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:48.766 08:32:22 -- nvmf/common.sh@544 -- # jq . 00:31:48.766 08:32:22 -- nvmf/common.sh@545 -- # IFS=, 00:31:48.766 08:32:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:48.766 "params": { 00:31:48.766 "name": "Nvme0", 00:31:48.766 "trtype": "tcp", 00:31:48.766 "traddr": "10.0.0.2", 00:31:48.766 "adrfam": "ipv4", 00:31:48.766 "trsvcid": "4420", 00:31:48.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:48.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:48.766 "hdgst": true, 00:31:48.766 "ddgst": true 00:31:48.766 }, 00:31:48.766 "method": "bdev_nvme_attach_controller" 00:31:48.766 }' 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # asan_lib= 00:31:48.766 08:32:22 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:48.766 08:32:22 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:48.766 08:32:22 -- common/autotest_common.sh@1322 -- # asan_lib= 
00:31:48.766 08:32:22 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:31:48.766 08:32:22 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:48.766 08:32:22 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.040 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:49.040 ... 00:31:49.040 fio-3.35 00:31:49.040 Starting 3 threads 00:31:49.040 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.609 [2024-02-13 08:32:23.068665] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:49.609 [2024-02-13 08:32:23.068699] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:59.621 00:31:59.621 filename0: (groupid=0, jobs=1): err= 0: pid=2469515: Tue Feb 13 08:32:33 2024 00:31:59.621 read: IOPS=309, BW=38.6MiB/s (40.5MB/s)(388MiB/10046msec) 00:31:59.621 slat (nsec): min=6192, max=30131, avg=10273.01, stdev=2544.78 00:31:59.621 clat (usec): min=4435, max=93823, avg=9680.10, stdev=8513.31 00:31:59.621 lat (usec): min=4446, max=93836, avg=9690.38, stdev=8513.66 00:31:59.621 clat percentiles (usec): 00:31:59.621 | 1.00th=[ 4752], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 6063], 00:31:59.621 | 30.00th=[ 6652], 40.00th=[ 7504], 50.00th=[ 8356], 60.00th=[ 8979], 00:31:59.621 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[12387], 00:31:59.621 | 99.00th=[52691], 99.50th=[53216], 99.90th=[57410], 99.95th=[93848], 00:31:59.621 | 99.99th=[93848] 00:31:59.621 bw ( KiB/s): min=29952, max=53248, per=44.11%, avg=39718.40, stdev=6559.05, samples=20 00:31:59.621 iops : min= 234, max= 416, avg=310.30, stdev=51.24, samples=20 00:31:59.621 lat (msec) : 10=78.49%, 20=17.94%, 50=0.68%, 100=2.90% 00:31:59.621 cpu : usr=95.87%, sys=3.76%, ctx=27, 
majf=0, minf=107 00:31:59.621 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.621 issued rwts: total=3105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.621 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:59.621 filename0: (groupid=0, jobs=1): err= 0: pid=2469516: Tue Feb 13 08:32:33 2024 00:31:59.621 read: IOPS=150, BW=18.8MiB/s (19.7MB/s)(189MiB/10048msec) 00:31:59.621 slat (nsec): min=6215, max=34583, avg=11606.04, stdev=2871.16 00:31:59.621 clat (usec): min=4461, max=99896, avg=19864.82, stdev=16364.86 00:31:59.621 lat (usec): min=4468, max=99908, avg=19876.42, stdev=16365.18 00:31:59.621 clat percentiles (msec): 00:31:59.621 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:31:59.622 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:31:59.622 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 55], 95.00th=[ 57], 00:31:59.622 | 99.00th=[ 61], 99.50th=[ 72], 99.90th=[ 100], 99.95th=[ 101], 00:31:59.622 | 99.99th=[ 101] 00:31:59.622 bw ( KiB/s): min=12544, max=26880, per=21.49%, avg=19355.15, stdev=4015.00, samples=20 00:31:59.622 iops : min= 98, max= 210, avg=151.20, stdev=31.38, samples=20 00:31:59.622 lat (msec) : 10=17.50%, 20=65.65%, 50=1.98%, 100=14.86% 00:31:59.622 cpu : usr=96.87%, sys=2.78%, ctx=16, majf=0, minf=129 00:31:59.622 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.622 issued rwts: total=1514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.622 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:59.622 filename0: (groupid=0, jobs=1): err= 0: pid=2469517: Tue Feb 13 08:32:33 2024 00:31:59.622 read: 
IOPS=243, BW=30.5MiB/s (32.0MB/s)(306MiB/10046msec) 00:31:59.622 slat (nsec): min=6215, max=37700, avg=10700.57, stdev=2630.90 00:31:59.622 clat (usec): min=4312, max=93727, avg=12270.67, stdev=12229.49 00:31:59.622 lat (usec): min=4319, max=93736, avg=12281.37, stdev=12229.80 00:31:59.622 clat percentiles (usec): 00:31:59.622 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5538], 20.00th=[ 7242], 00:31:59.622 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:31:59.622 | 70.00th=[10290], 80.00th=[10945], 90.00th=[12387], 95.00th=[51119], 00:31:59.622 | 99.00th=[53216], 99.50th=[54264], 99.90th=[92799], 99.95th=[92799], 00:31:59.622 | 99.99th=[93848] 00:31:59.622 bw ( KiB/s): min=23296, max=40192, per=34.80%, avg=31334.40, stdev=4294.48, samples=20 00:31:59.622 iops : min= 182, max= 314, avg=244.80, stdev=33.55, samples=20 00:31:59.622 lat (msec) : 10=64.08%, 20=28.04%, 50=0.78%, 100=7.10% 00:31:59.622 cpu : usr=96.49%, sys=3.15%, ctx=27, majf=0, minf=208 00:31:59.622 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.622 issued rwts: total=2450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.622 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:59.622 00:31:59.622 Run status group 0 (all jobs): 00:31:59.622 READ: bw=87.9MiB/s (92.2MB/s), 18.8MiB/s-38.6MiB/s (19.7MB/s-40.5MB/s), io=884MiB (927MB), run=10046-10048msec 00:31:59.882 08:32:33 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:59.882 08:32:33 -- target/dif.sh@43 -- # local sub 00:31:59.882 08:32:33 -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.882 08:32:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:59.882 08:32:33 -- target/dif.sh@36 -- # local sub_id=0 00:31:59.882 08:32:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:31:59.882 08:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.882 08:32:33 -- common/autotest_common.sh@10 -- # set +x 00:31:59.882 08:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.882 08:32:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:59.882 08:32:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.882 08:32:33 -- common/autotest_common.sh@10 -- # set +x 00:31:59.882 08:32:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.882 00:31:59.882 real 0m11.286s 00:31:59.882 user 0m35.964s 00:31:59.882 sys 0m1.331s 00:31:59.882 08:32:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:59.882 08:32:33 -- common/autotest_common.sh@10 -- # set +x 00:31:59.882 ************************************ 00:31:59.882 END TEST fio_dif_digest 00:31:59.882 ************************************ 00:31:59.882 08:32:33 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:59.882 08:32:33 -- target/dif.sh@147 -- # nvmftestfini 00:31:59.882 08:32:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:59.882 08:32:33 -- nvmf/common.sh@116 -- # sync 00:31:59.882 08:32:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:59.882 08:32:33 -- nvmf/common.sh@119 -- # set +e 00:31:59.882 08:32:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:59.882 08:32:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:59.882 rmmod nvme_tcp 00:31:59.882 rmmod nvme_fabrics 00:31:59.882 rmmod nvme_keyring 00:31:59.882 08:32:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:59.882 08:32:33 -- nvmf/common.sh@123 -- # set -e 00:31:59.882 08:32:33 -- nvmf/common.sh@124 -- # return 0 00:31:59.882 08:32:33 -- nvmf/common.sh@477 -- # '[' -n 2460882 ']' 00:31:59.882 08:32:33 -- nvmf/common.sh@478 -- # killprocess 2460882 00:31:59.882 08:32:33 -- common/autotest_common.sh@924 -- # '[' -z 2460882 ']' 00:31:59.882 08:32:33 -- common/autotest_common.sh@928 -- # kill -0 2460882 00:31:59.882 08:32:33 -- 
common/autotest_common.sh@929 -- # uname 00:31:59.882 08:32:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:31:59.882 08:32:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2460882 00:32:00.141 08:32:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:32:00.141 08:32:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:32:00.141 08:32:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2460882' 00:32:00.141 killing process with pid 2460882 00:32:00.141 08:32:33 -- common/autotest_common.sh@943 -- # kill 2460882 00:32:00.141 08:32:33 -- common/autotest_common.sh@948 -- # wait 2460882 00:32:00.141 08:32:33 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:00.141 08:32:33 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:02.679 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:02.938 Waiting for block devices as requested 00:32:03.198 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:03.198 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:03.198 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:03.456 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:03.456 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:03.456 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:03.456 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:03.716 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:03.716 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:03.716 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:03.976 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:03.976 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:03.976 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:03.976 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:04.235 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:04.235 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:04.235 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 
00:32:04.494 08:32:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:04.494 08:32:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:04.494 08:32:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:04.494 08:32:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:04.494 08:32:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.494 08:32:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:04.494 08:32:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.399 08:32:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:06.399 00:32:06.399 real 1m14.733s 00:32:06.399 user 7m8.768s 00:32:06.399 sys 0m19.088s 00:32:06.399 08:32:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:06.399 08:32:40 -- common/autotest_common.sh@10 -- # set +x 00:32:06.399 ************************************ 00:32:06.399 END TEST nvmf_dif 00:32:06.399 ************************************ 00:32:06.399 08:32:40 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:06.399 08:32:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:06.399 08:32:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:06.399 08:32:40 -- common/autotest_common.sh@10 -- # set +x 00:32:06.399 ************************************ 00:32:06.399 START TEST nvmf_abort_qd_sizes 00:32:06.399 ************************************ 00:32:06.399 08:32:40 -- common/autotest_common.sh@1102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:06.659 * Looking for test storage... 
00:32:06.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.659 08:32:40 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.659 08:32:40 -- nvmf/common.sh@7 -- # uname -s 00:32:06.659 08:32:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.659 08:32:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.659 08:32:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.659 08:32:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.659 08:32:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.659 08:32:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.659 08:32:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.659 08:32:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.659 08:32:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.659 08:32:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.659 08:32:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:06.659 08:32:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:06.659 08:32:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.659 08:32:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.659 08:32:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.659 08:32:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.659 08:32:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.659 08:32:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.659 08:32:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.659 08:32:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.659 08:32:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.659 08:32:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.659 08:32:40 -- paths/export.sh@5 -- # export PATH 00:32:06.659 08:32:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.659 08:32:40 -- nvmf/common.sh@46 -- # : 0 00:32:06.659 08:32:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:06.659 08:32:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:06.659 
08:32:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:06.659 08:32:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.659 08:32:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.659 08:32:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:06.659 08:32:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:06.659 08:32:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:06.659 08:32:40 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:06.659 08:32:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:06.659 08:32:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.659 08:32:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:06.659 08:32:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:06.659 08:32:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:06.659 08:32:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.659 08:32:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:06.659 08:32:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.659 08:32:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:06.659 08:32:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:06.659 08:32:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:06.659 08:32:40 -- common/autotest_common.sh@10 -- # set +x 00:32:13.222 08:32:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:13.222 08:32:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:13.222 08:32:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:13.222 08:32:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:13.222 08:32:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:13.222 08:32:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:13.222 08:32:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:13.222 08:32:45 -- nvmf/common.sh@294 -- # net_devs=() 00:32:13.222 08:32:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:13.222 
08:32:45 -- nvmf/common.sh@295 -- # e810=() 00:32:13.222 08:32:45 -- nvmf/common.sh@295 -- # local -ga e810 00:32:13.222 08:32:45 -- nvmf/common.sh@296 -- # x722=() 00:32:13.222 08:32:45 -- nvmf/common.sh@296 -- # local -ga x722 00:32:13.222 08:32:45 -- nvmf/common.sh@297 -- # mlx=() 00:32:13.222 08:32:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:13.222 08:32:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.222 08:32:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:13.222 08:32:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:13.222 08:32:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:13.222 08:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:13.222 08:32:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:13.222 Found 0000:af:00.0 (0x8086 - 0x159b) 
00:32:13.222 08:32:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:13.222 08:32:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:13.222 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:13.222 08:32:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:13.222 08:32:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:13.222 08:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.222 08:32:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:13.222 08:32:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.222 08:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:13.222 Found net devices under 0000:af:00.0: cvl_0_0 00:32:13.222 08:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.222 08:32:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:13.222 08:32:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.222 08:32:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:13.222 08:32:45 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.222 08:32:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:13.222 Found net devices under 0000:af:00.1: cvl_0_1 00:32:13.222 08:32:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.222 08:32:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:13.222 08:32:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:13.222 08:32:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:13.222 08:32:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:13.222 08:32:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.222 08:32:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.222 08:32:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.222 08:32:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:13.222 08:32:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.222 08:32:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.222 08:32:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:13.222 08:32:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.222 08:32:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.222 08:32:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:13.222 08:32:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:13.222 08:32:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.222 08:32:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.222 08:32:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.222 08:32:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.222 08:32:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:13.222 08:32:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:32:13.222 08:32:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.222 08:32:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.222 08:32:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:13.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:32:13.222 00:32:13.222 --- 10.0.0.2 ping statistics --- 00:32:13.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.222 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:32:13.222 08:32:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:13.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:32:13.222 00:32:13.222 --- 10.0.0.1 ping statistics --- 00:32:13.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.222 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:32:13.222 08:32:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.222 08:32:46 -- nvmf/common.sh@410 -- # return 0 00:32:13.222 08:32:46 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:13.222 08:32:46 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:15.122 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:15.691 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:80:04.7 (8086 2021): ioatdma -> 
vfio-pci 00:32:15.691 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:15.691 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:16.629 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:32:16.629 08:32:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.629 08:32:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:16.629 08:32:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:16.629 08:32:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.629 08:32:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:16.629 08:32:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:16.629 08:32:50 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:16.629 08:32:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:16.629 08:32:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:16.629 08:32:50 -- common/autotest_common.sh@10 -- # set +x 00:32:16.629 08:32:50 -- nvmf/common.sh@469 -- # nvmfpid=2478094 00:32:16.629 08:32:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:16.629 08:32:50 -- nvmf/common.sh@470 -- # waitforlisten 2478094 00:32:16.629 08:32:50 -- common/autotest_common.sh@817 -- # '[' -z 2478094 ']' 00:32:16.629 08:32:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.629 08:32:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:16.629 08:32:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:16.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.629 08:32:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:16.629 08:32:50 -- common/autotest_common.sh@10 -- # set +x 00:32:16.629 [2024-02-13 08:32:50.286170] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:32:16.629 [2024-02-13 08:32:50.286212] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.629 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.888 [2024-02-13 08:32:50.350537] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:16.888 [2024-02-13 08:32:50.428129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:16.888 [2024-02-13 08:32:50.428259] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.888 [2024-02-13 08:32:50.428266] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.888 [2024-02-13 08:32:50.428273] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:16.888 [2024-02-13 08:32:50.428308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.888 [2024-02-13 08:32:50.428413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:16.888 [2024-02-13 08:32:50.428479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:16.888 [2024-02-13 08:32:50.428480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.457 08:32:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:17.457 08:32:51 -- common/autotest_common.sh@850 -- # return 0 00:32:17.457 08:32:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:17.457 08:32:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:17.457 08:32:51 -- common/autotest_common.sh@10 -- # set +x 00:32:17.457 08:32:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:17.457 08:32:51 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:17.457 08:32:51 -- scripts/common.sh@312 -- # local nvmes 00:32:17.457 08:32:51 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:32:17.457 08:32:51 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:17.457 08:32:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:17.457 08:32:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:32:17.457 08:32:51 -- scripts/common.sh@322 -- # uname -s 00:32:17.457 08:32:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:17.457 08:32:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:17.457 08:32:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 
00:32:17.457 08:32:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:32:17.457 08:32:51 -- scripts/common.sh@323 -- # continue 00:32:17.457 08:32:51 -- scripts/common.sh@327 -- # (( 1 )) 00:32:17.457 08:32:51 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:17.457 08:32:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:17.457 08:32:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:17.457 08:32:51 -- common/autotest_common.sh@10 -- # set +x 00:32:17.457 ************************************ 00:32:17.457 START TEST spdk_target_abort 00:32:17.457 ************************************ 00:32:17.457 08:32:51 -- common/autotest_common.sh@1102 -- # spdk_target 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:17.457 08:32:51 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:32:17.457 08:32:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:17.457 08:32:51 -- common/autotest_common.sh@10 -- # set +x 00:32:20.786 spdk_targetn1 00:32:20.786 08:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.787 08:32:53 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:20.787 08:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.787 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.787 [2024-02-13 08:32:53.977180] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.787 08:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.787 08:32:53 -- 
target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:20.787 08:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.787 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.787 08:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.787 08:32:53 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:20.787 08:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.787 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.787 08:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:20.787 08:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:20.787 08:32:54 -- common/autotest_common.sh@10 -- # set +x 00:32:20.787 [2024-02-13 08:32:54.005985] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.787 08:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@28 -- # for r 
in trtype adrfam traddr trsvcid subnqn 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:20.787 08:32:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:20.787 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.077 Initializing NVMe Controllers 00:32:24.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:24.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:24.077 Initialization complete. Launching workers. 
00:32:24.077 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 7426, failed: 0 00:32:24.077 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1472, failed to submit 5954 00:32:24.077 success 887, unsuccess 585, failed 0 00:32:24.077 08:32:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:24.077 08:32:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:24.077 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.369 Initializing NVMe Controllers 00:32:27.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:27.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:27.369 Initialization complete. Launching workers. 00:32:27.369 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8621, failed: 0 00:32:27.369 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1178, failed to submit 7443 00:32:27.369 success 349, unsuccess 829, failed 0 00:32:27.369 08:33:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:27.369 08:33:00 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:27.369 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.906 Initializing NVMe Controllers 00:32:29.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:29.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:29.906 Initialization complete. Launching workers. 
00:32:29.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 37393, failed: 0 00:32:29.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2852, failed to submit 34541 00:32:29.906 success 657, unsuccess 2195, failed 0 00:32:29.906 08:33:03 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:29.906 08:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:29.906 08:33:03 -- common/autotest_common.sh@10 -- # set +x 00:32:29.906 08:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:29.906 08:33:03 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:29.906 08:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:29.906 08:33:03 -- common/autotest_common.sh@10 -- # set +x 00:32:31.286 08:33:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.286 08:33:04 -- target/abort_qd_sizes.sh@62 -- # killprocess 2478094 00:32:31.286 08:33:04 -- common/autotest_common.sh@924 -- # '[' -z 2478094 ']' 00:32:31.286 08:33:04 -- common/autotest_common.sh@928 -- # kill -0 2478094 00:32:31.286 08:33:04 -- common/autotest_common.sh@929 -- # uname 00:32:31.286 08:33:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:32:31.286 08:33:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 2478094 00:32:31.286 08:33:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:32:31.286 08:33:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:32:31.286 08:33:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 2478094' 00:32:31.286 killing process with pid 2478094 00:32:31.286 08:33:04 -- common/autotest_common.sh@943 -- # kill 2478094 00:32:31.286 08:33:04 -- common/autotest_common.sh@948 -- # wait 2478094 00:32:31.546 00:32:31.546 real 0m13.964s 00:32:31.546 user 0m55.485s 00:32:31.546 sys 0m2.216s 00:32:31.546 08:33:05 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:32:31.546 08:33:05 -- common/autotest_common.sh@10 -- # set +x 00:32:31.546 ************************************ 00:32:31.546 END TEST spdk_target_abort 00:32:31.546 ************************************ 00:32:31.546 08:33:05 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:31.546 08:33:05 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:31.546 08:33:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:31.546 08:33:05 -- common/autotest_common.sh@10 -- # set +x 00:32:31.546 ************************************ 00:32:31.546 START TEST kernel_target_abort 00:32:31.546 ************************************ 00:32:31.546 08:33:05 -- common/autotest_common.sh@1102 -- # kernel_target 00:32:31.546 08:33:05 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:31.546 08:33:05 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:31.546 08:33:05 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:31.546 08:33:05 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:31.546 08:33:05 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:31.546 08:33:05 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:31.546 08:33:05 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:31.546 08:33:05 -- nvmf/common.sh@627 -- # local block nvme 00:32:31.546 08:33:05 -- nvmf/common.sh@629 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:31.546 08:33:05 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:31.546 08:33:05 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:31.546 08:33:05 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:34.081 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:34.081 Waiting for block devices as requested 00:32:34.341 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:34.341 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:34.341 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:34.600 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:34.600 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:34.600 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:34.600 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:34.859 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:34.859 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:34.859 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:34.859 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:35.118 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:35.118 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:35.118 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:35.118 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:35.378 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:35.378 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:35.378 08:33:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:35.378 08:33:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:35.378 08:33:09 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:35.378 08:33:09 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:35.378 08:33:09 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:35.378 No valid GPT data, bailing 00:32:35.378 08:33:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:32:35.378 08:33:09 -- scripts/common.sh@393 -- # pt= 00:32:35.378 08:33:09 -- scripts/common.sh@394 -- # return 1 00:32:35.378 08:33:09 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:35.378 08:33:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:35.378 08:33:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:32:35.378 08:33:09 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:32:35.378 08:33:09 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:32:35.378 08:33:09 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:32:35.639 No valid GPT data, bailing 00:32:35.639 08:33:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:32:35.639 08:33:09 -- scripts/common.sh@393 -- # pt= 00:32:35.639 08:33:09 -- scripts/common.sh@394 -- # return 1 00:32:35.639 08:33:09 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:32:35.639 08:33:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:35.639 08:33:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:32:35.639 08:33:09 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:32:35.639 08:33:09 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:32:35.639 08:33:09 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n2 00:32:35.639 No valid GPT data, bailing 00:32:35.639 08:33:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:32:35.639 08:33:09 -- scripts/common.sh@393 -- # pt= 00:32:35.639 08:33:09 -- scripts/common.sh@394 -- # return 1 00:32:35.639 08:33:09 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:32:35.639 08:33:09 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n2 ]] 00:32:35.639 08:33:09 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:35.639 08:33:09 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:35.639 08:33:09 -- 
nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:35.639 08:33:09 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:35.639 08:33:09 -- nvmf/common.sh@654 -- # echo 1 00:32:35.639 08:33:09 -- nvmf/common.sh@655 -- # echo /dev/nvme1n2 00:32:35.639 08:33:09 -- nvmf/common.sh@656 -- # echo 1 00:32:35.639 08:33:09 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:35.639 08:33:09 -- nvmf/common.sh@663 -- # echo tcp 00:32:35.639 08:33:09 -- nvmf/common.sh@664 -- # echo 4420 00:32:35.639 08:33:09 -- nvmf/common.sh@665 -- # echo ipv4 00:32:35.639 08:33:09 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:35.639 08:33:09 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:35.639 00:32:35.639 Discovery Log Number of Records 2, Generation counter 2 00:32:35.639 =====Discovery Log Entry 0====== 00:32:35.639 trtype: tcp 00:32:35.639 adrfam: ipv4 00:32:35.639 subtype: current discovery subsystem 00:32:35.639 treq: not specified, sq flow control disable supported 00:32:35.639 portid: 1 00:32:35.639 trsvcid: 4420 00:32:35.639 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:35.639 traddr: 10.0.0.1 00:32:35.639 eflags: none 00:32:35.639 sectype: none 00:32:35.639 =====Discovery Log Entry 1====== 00:32:35.639 trtype: tcp 00:32:35.639 adrfam: ipv4 00:32:35.639 subtype: nvme subsystem 00:32:35.639 treq: not specified, sq flow control disable supported 00:32:35.639 portid: 1 00:32:35.639 trsvcid: 4420 00:32:35.639 subnqn: kernel_target 00:32:35.639 traddr: 10.0.0.1 00:32:35.639 eflags: none 00:32:35.639 sectype: none 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:35.639 08:33:09 -- 
target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:35.639 08:33:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:35.639 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.080 Initializing NVMe Controllers 00:32:39.080 Attached to 
NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:39.080 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:39.080 Initialization complete. Launching workers. 00:32:39.080 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 10685, failed: 10675 00:32:39.080 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 21360, failed to submit 0 00:32:39.080 success 0, unsuccess 21360, failed 0 00:32:39.080 08:33:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:39.080 08:33:12 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:39.080 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.375 Initializing NVMe Controllers 00:32:42.375 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:42.375 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:42.375 Initialization complete. Launching workers. 00:32:42.375 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 11112, failed: 11059 00:32:42.375 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22113, failed to submit 58 00:32:42.375 success 0, unsuccess 22113, failed 0 00:32:42.375 08:33:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:42.375 08:33:15 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:42.375 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.915 Initializing NVMe Controllers 00:32:44.915 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:44.915 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:44.915 Initialization complete. 
Launching workers. 00:32:44.915 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 11261, failed: 11226 00:32:44.915 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22416, failed to submit 71 00:32:44.915 success 0, unsuccess 22416, failed 0 00:32:44.915 08:33:18 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:44.915 08:33:18 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:44.915 08:33:18 -- nvmf/common.sh@677 -- # echo 0 00:32:44.915 08:33:18 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:44.915 08:33:18 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:44.915 08:33:18 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:44.915 08:33:18 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:44.915 08:33:18 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:44.915 08:33:18 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:45.175 00:32:45.175 real 0m13.465s 00:32:45.175 user 0m4.192s 00:32:45.175 sys 0m3.995s 00:32:45.175 08:33:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:45.175 08:33:18 -- common/autotest_common.sh@10 -- # set +x 00:32:45.175 ************************************ 00:32:45.175 END TEST kernel_target_abort 00:32:45.175 ************************************ 00:32:45.175 08:33:18 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:45.175 08:33:18 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:45.175 08:33:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:45.175 08:33:18 -- nvmf/common.sh@116 -- # sync 00:32:45.175 08:33:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:45.175 08:33:18 -- nvmf/common.sh@119 -- # set +e 00:32:45.175 08:33:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:45.175 08:33:18 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:32:45.175 rmmod nvme_tcp 00:32:45.175 rmmod nvme_fabrics 00:32:45.175 rmmod nvme_keyring 00:32:45.175 08:33:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:45.175 08:33:18 -- nvmf/common.sh@123 -- # set -e 00:32:45.175 08:33:18 -- nvmf/common.sh@124 -- # return 0 00:32:45.175 08:33:18 -- nvmf/common.sh@477 -- # '[' -n 2478094 ']' 00:32:45.175 08:33:18 -- nvmf/common.sh@478 -- # killprocess 2478094 00:32:45.175 08:33:18 -- common/autotest_common.sh@924 -- # '[' -z 2478094 ']' 00:32:45.175 08:33:18 -- common/autotest_common.sh@928 -- # kill -0 2478094 00:32:45.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 928: kill: (2478094) - No such process 00:32:45.175 08:33:18 -- common/autotest_common.sh@951 -- # echo 'Process with pid 2478094 is not found' 00:32:45.175 Process with pid 2478094 is not found 00:32:45.175 08:33:18 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:45.175 08:33:18 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:48.472 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:48.472 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:48.473 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:80:04.5 (8086 2021): 
Already using the ioatdma driver 00:32:48.473 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:48.473 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:48.473 08:33:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:48.473 08:33:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:48.473 08:33:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:48.473 08:33:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:48.473 08:33:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.473 08:33:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:48.473 08:33:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.014 08:33:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:51.014 00:32:51.014 real 0m44.122s 00:32:51.014 user 1m4.178s 00:32:51.014 sys 0m15.279s 00:32:51.014 08:33:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:51.014 08:33:24 -- common/autotest_common.sh@10 -- # set +x 00:32:51.014 ************************************ 00:32:51.014 END TEST nvmf_abort_qd_sizes 00:32:51.014 ************************************ 00:32:51.014 08:33:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 
']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:51.014 08:33:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:51.014 08:33:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:51.014 08:33:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:51.014 08:33:24 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:51.014 08:33:24 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:51.014 08:33:24 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:51.014 08:33:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:51.014 08:33:24 -- common/autotest_common.sh@10 -- # set +x 00:32:51.014 08:33:24 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:51.014 08:33:24 -- common/autotest_common.sh@1369 -- # local autotest_es=0 00:32:51.014 08:33:24 -- common/autotest_common.sh@1370 -- # xtrace_disable 00:32:51.014 08:33:24 -- common/autotest_common.sh@10 -- # set +x 00:32:55.213 INFO: APP EXITING 00:32:55.213 INFO: killing all VMs 00:32:55.213 INFO: killing vhost app 00:32:55.213 INFO: EXIT DONE 00:32:57.749 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:32:57.749 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:57.749 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:80:04.6 (8086 2021): Already using the ioatdma driver 
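The repeated "vfio-pci -> ioatdma" and "Already using the ioatdma driver" lines in the setup.sh reset output above come from inspecting each PCI device's driver symlink under sysfs. A minimal sketch of that check — the `pci_driver` helper name and the overridable sysfs-root parameter are illustrative, not part of setup.sh itself:

```shell
# Report the kernel driver currently bound to a PCI device, the way
# setup.sh decides whether to print "Already using the <driver> driver".
# The sysfs root is a parameter so the function can be exercised against
# a fake tree; on a real system the default /sys is used.
pci_driver() {
    local bdf=$1 sysfs=${2:-/sys}
    local link="$sysfs/bus/pci/devices/$bdf/driver"
    if [ -e "$link" ]; then
        # The driver symlink resolves to .../drivers/<name>
        basename "$(readlink -f "$link")"
    else
        echo unbound
    fi
}
```

Rebinding a device (vfio-pci -> ioatdma) then amounts to unbinding from the current driver and writing the BDF into the new driver's bind file, which is why a second reset reports "Already using".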
00:32:57.749 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:57.749 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:58.008 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:01.298 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:33:01.298 Cleaning 00:33:01.298 Removing: /var/run/dpdk/spdk0/config 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:01.298 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:01.298 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:01.298 Removing: /var/run/dpdk/spdk1/config 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:01.298 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:01.298 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:01.298 Removing: 
/var/run/dpdk/spdk1/mp_socket 00:33:01.298 Removing: /var/run/dpdk/spdk2/config 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:01.298 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:01.298 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:01.298 Removing: /var/run/dpdk/spdk3/config 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:01.298 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:01.298 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:01.298 Removing: /var/run/dpdk/spdk4/config 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:01.298 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:01.298 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:01.298 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:01.298 Removing: /dev/shm/bdev_svc_trace.1 00:33:01.298 Removing: /dev/shm/nvmf_trace.0 00:33:01.298 Removing: /dev/shm/spdk_tgt_trace.pid2071365 00:33:01.298 Removing: /var/run/dpdk/spdk0 00:33:01.298 Removing: /var/run/dpdk/spdk1 00:33:01.298 Removing: /var/run/dpdk/spdk2 00:33:01.298 Removing: /var/run/dpdk/spdk3 00:33:01.298 Removing: /var/run/dpdk/spdk4 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2069108 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2070298 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2071365 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2072025 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2073534 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2074810 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2075092 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2075375 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2075672 00:33:01.298 Removing: /var/run/dpdk/spdk_pid2075962 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2076212 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2076461 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2076739 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2077483 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2080474 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2080739 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2081000 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2081283 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2081842 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2081861 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2082355 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2082579 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2083013 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2083254 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2083500 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2083735 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2084168 00:33:01.299 Removing: 
/var/run/dpdk/spdk_pid2084358 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2084640 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2084912 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2085109 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2085170 00:33:01.299 Removing: /var/run/dpdk/spdk_pid2085401 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2085648 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2085883 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2086129 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2086361 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2086617 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2086847 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2087094 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2087332 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2087576 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2087807 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2088061 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2088293 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2088547 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2088778 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2089026 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2089263 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2089507 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2089738 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2089991 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2090222 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2090466 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2090701 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2090956 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2091185 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2091438 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2091669 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2091916 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2092151 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2092397 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2092629 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2092881 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2093113 
00:33:01.558 Removing: /var/run/dpdk/spdk_pid2093366 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2093607 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2093854 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2094094 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2094341 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2094572 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2094823 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2095100 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2095405 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2099518 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2184271 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2188796 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2198001 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2203818 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2208320 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2208998 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2218583 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2218926 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2223466 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2229597 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2232197 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2243144 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2252811 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2254574 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2255568 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2273744 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2278017 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2282880 00:33:01.558 Removing: /var/run/dpdk/spdk_pid2284630 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2286478 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2286722 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2286959 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2287171 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2287720 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2289566 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2290555 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2291058 00:33:01.818 Removing: 
/var/run/dpdk/spdk_pid2297099 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2303070 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2308402 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2346147 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2350670 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2356978 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2358281 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2359803 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2364400 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2368775 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2376912 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2377037 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2382015 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2382250 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2382378 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2382742 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2382748 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2384148 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2385851 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2387581 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2389186 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2390798 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2392624 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2399065 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2399629 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2401141 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2401951 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2407931 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2410699 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2416372 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2422417 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2428616 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2429309 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2429972 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2430496 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2431467 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2432168 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2432777 
00:33:01.818 Removing: /var/run/dpdk/spdk_pid2433353 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2438161 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2438822 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2445057 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2445246 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2447467 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2455480 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2455485 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2461155 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2463125 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2465034 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2466145 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2468129 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2469208 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2478721 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2479185 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2479821 00:33:01.818 Removing: /var/run/dpdk/spdk_pid2482657 00:33:02.077 Removing: /var/run/dpdk/spdk_pid2483248 00:33:02.077 Removing: /var/run/dpdk/spdk_pid2483718 00:33:02.077 Clean 00:33:02.077 killing process with pid 2018818 00:33:10.198 killing process with pid 2018815 00:33:10.198 killing process with pid 2018817 00:33:10.198 killing process with pid 2018816 00:33:10.198 08:33:43 -- common/autotest_common.sh@1434 -- # return 0 00:33:10.198 08:33:43 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:33:10.198 08:33:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:10.198 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.198 08:33:43 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:33:10.198 08:33:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:10.198 08:33:43 -- common/autotest_common.sh@10 -- # set +x 00:33:10.198 08:33:43 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:10.198 08:33:43 -- spdk/autotest.sh@392 -- # [[ -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:10.198 08:33:43 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:10.198 08:33:43 -- spdk/autotest.sh@394 -- # hash lcov
00:33:10.198 08:33:43 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:10.198 08:33:43 -- spdk/autotest.sh@396 -- # hostname
00:33:10.198 08:33:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:10.198 geninfo: WARNING: invalid characters removed from testname!
00:33:28.293 08:33:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:28.553 08:34:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:30.531 08:34:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:31.912 08:34:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:33.819 08:34:07 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:35.196 08:34:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:36.574 08:34:10 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:36.833 08:34:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:36.833 08:34:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:36.833 08:34:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:36.833 08:34:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:36.833 08:34:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.833 08:34:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.833 08:34:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.833 08:34:10 -- paths/export.sh@5 -- $ export PATH
00:33:36.833 08:34:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:36.833 08:34:10 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:36.833 08:34:10 -- common/autobuild_common.sh@435 -- $ date +%s
00:33:36.833 08:34:10 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707809650.XXXXXX
00:33:36.833 08:34:10 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707809650.pFvMYV
00:33:36.833 08:34:10 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:33:36.833 08:34:10 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:33:36.833 08:34:10 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:36.833 08:34:10 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:36.834 08:34:10 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:36.834 08:34:10 -- common/autobuild_common.sh@451 -- $ get_config_params
00:33:36.834 08:34:10 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:33:36.834 08:34:10 -- common/autotest_common.sh@10 -- $ set +x
00:33:36.834 08:34:10 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:33:36.834 08:34:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:33:36.834 08:34:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:36.834 08:34:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:36.834 08:34:10 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:36.834 08:34:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:36.834 08:34:10 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:36.834 08:34:10 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:36.834 08:34:10 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:36.834 08:34:10 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:36.834 08:34:10 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:36.834 + [[ -n 1976500 ]]
00:33:36.834 + sudo kill 1976500
00:33:36.843 [Pipeline] }
00:33:36.860 [Pipeline] // stage
00:33:36.864 [Pipeline] }
00:33:36.880 [Pipeline] // timeout
00:33:36.884 [Pipeline] }
00:33:36.899 [Pipeline] // catchError
00:33:36.903 [Pipeline] }
00:33:36.918 [Pipeline] // wrap
00:33:36.923 [Pipeline] }
00:33:36.936 [Pipeline] // catchError
00:33:36.944 [Pipeline] stage
00:33:36.945 [Pipeline] { (Epilogue)
00:33:36.957 [Pipeline] catchError
00:33:36.958 [Pipeline] {
00:33:36.971 [Pipeline] echo
00:33:36.972 Cleanup processes
00:33:36.976 [Pipeline] sh
00:33:37.257 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.257 2497143 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.270 [Pipeline] sh
00:33:37.551 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.552 ++ grep -v 'sudo pgrep'
00:33:37.552 ++ awk '{print $1}'
00:33:37.552 + sudo kill -9
00:33:37.552 + true
00:33:37.563 [Pipeline] sh
00:33:37.847 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:47.847 [Pipeline] sh
00:33:48.159 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:48.159 Artifacts sizes are good
00:33:48.174 [Pipeline] archiveArtifacts
00:33:48.181 Archiving artifacts
00:33:48.396 [Pipeline] sh
00:33:48.681 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:48.725 [Pipeline] cleanWs
00:33:48.734 [WS-CLEANUP] Deleting project workspace...
00:33:48.734 [WS-CLEANUP] Deferred wipeout is used...
00:33:48.740 [WS-CLEANUP] done
00:33:48.742 [Pipeline] }
00:33:48.761 [Pipeline] // catchError
00:33:48.773 [Pipeline] sh
00:33:49.055 + logger -p user.info -t JENKINS-CI
00:33:49.065 [Pipeline] }
00:33:49.080 [Pipeline] // stage
00:33:49.086 [Pipeline] }
00:33:49.102 [Pipeline] // node
00:33:49.107 [Pipeline] End of Pipeline
00:33:49.182 Finished: SUCCESS